Jul 15 23:52:43.823260 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025
Jul 15 23:52:43.823292 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:52:43.823304 kernel: BIOS-provided physical RAM map:
Jul 15 23:52:43.823313 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 15 23:52:43.823322 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 15 23:52:43.823331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 15 23:52:43.823341 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 15 23:52:43.823354 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 15 23:52:43.823367 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 15 23:52:43.823376 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 15 23:52:43.823385 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 15 23:52:43.823394 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 15 23:52:43.823403 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 15 23:52:43.823412 kernel: NX (Execute Disable) protection: active
Jul 15 23:52:43.823426 kernel: APIC: Static calls initialized
Jul 15 23:52:43.823436 kernel: SMBIOS 2.8 present.
Jul 15 23:52:43.823450 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 15 23:52:43.823459 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:52:43.823469 kernel: Hypervisor detected: KVM
Jul 15 23:52:43.823478 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 15 23:52:43.823487 kernel: kvm-clock: using sched offset of 4233099851 cycles
Jul 15 23:52:43.823497 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 15 23:52:43.823507 kernel: tsc: Detected 2794.750 MHz processor
Jul 15 23:52:43.823521 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 15 23:52:43.823532 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 15 23:52:43.823541 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 15 23:52:43.823588 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 15 23:52:43.823598 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 15 23:52:43.823608 kernel: Using GB pages for direct mapping
Jul 15 23:52:43.823617 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:52:43.823627 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 15 23:52:43.823636 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823667 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823676 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823686 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 15 23:52:43.823695 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823706 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823716 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823726 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:52:43.823737 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 15 23:52:43.823755 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 15 23:52:43.823766 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 15 23:52:43.823776 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 15 23:52:43.823787 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 15 23:52:43.823797 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 15 23:52:43.823807 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 15 23:52:43.823820 kernel: No NUMA configuration found
Jul 15 23:52:43.823830 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 15 23:52:43.823839 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jul 15 23:52:43.823849 kernel: Zone ranges:
Jul 15 23:52:43.823859 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 15 23:52:43.823869 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 15 23:52:43.823879 kernel: Normal empty
Jul 15 23:52:43.823889 kernel: Device empty
Jul 15 23:52:43.823899 kernel: Movable zone start for each node
Jul 15 23:52:43.823910 kernel: Early memory node ranges
Jul 15 23:52:43.823924 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 15 23:52:43.823934 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 15 23:52:43.823945 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 15 23:52:43.823955 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 15 23:52:43.823966 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 15 23:52:43.823976 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 15 23:52:43.823987 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 15 23:52:43.824001 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 15 23:52:43.824012 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 15 23:52:43.824025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 15 23:52:43.824035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 15 23:52:43.824049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 15 23:52:43.824060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 15 23:52:43.824071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 15 23:52:43.824081 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 15 23:52:43.824092 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 15 23:52:43.824102 kernel: TSC deadline timer available
Jul 15 23:52:43.824113 kernel: CPU topo: Max. logical packages: 1
Jul 15 23:52:43.824127 kernel: CPU topo: Max. logical dies: 1
Jul 15 23:52:43.824137 kernel: CPU topo: Max. dies per package: 1
Jul 15 23:52:43.824147 kernel: CPU topo: Max. threads per core: 1
Jul 15 23:52:43.824158 kernel: CPU topo: Num. cores per package: 4
Jul 15 23:52:43.824168 kernel: CPU topo: Num. threads per package: 4
Jul 15 23:52:43.824179 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 15 23:52:43.824190 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 15 23:52:43.824200 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 15 23:52:43.824211 kernel: kvm-guest: setup PV sched yield
Jul 15 23:52:43.824221 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 15 23:52:43.824234 kernel: Booting paravirtualized kernel on KVM
Jul 15 23:52:43.824245 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 15 23:52:43.824256 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 15 23:52:43.824267 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 15 23:52:43.824277 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 15 23:52:43.824288 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 15 23:52:43.824298 kernel: kvm-guest: PV spinlocks enabled
Jul 15 23:52:43.824309 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 15 23:52:43.824321 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66
Jul 15 23:52:43.824335 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:52:43.824345 kernel: random: crng init done
Jul 15 23:52:43.824356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:52:43.824367 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:52:43.824377 kernel: Fallback order for Node 0: 0
Jul 15 23:52:43.824388 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jul 15 23:52:43.824398 kernel: Policy zone: DMA32
Jul 15 23:52:43.824409 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:52:43.824422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:52:43.824433 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 15 23:52:43.824446 kernel: ftrace: allocated 157 pages with 5 groups
Jul 15 23:52:43.824456 kernel: Dynamic Preempt: voluntary
Jul 15 23:52:43.824467 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:52:43.824478 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:52:43.824489 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:52:43.824500 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:52:43.824514 kernel: Rude variant of Tasks RCU enabled.
Jul 15 23:52:43.824527 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:52:43.824538 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:52:43.824557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:52:43.824568 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:52:43.824579 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:52:43.824589 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:52:43.824600 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 15 23:52:43.824610 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:52:43.824632 kernel: Console: colour VGA+ 80x25
Jul 15 23:52:43.824643 kernel: printk: legacy console [ttyS0] enabled
Jul 15 23:52:43.824678 kernel: ACPI: Core revision 20240827
Jul 15 23:52:43.824689 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 15 23:52:43.824704 kernel: APIC: Switch to symmetric I/O mode setup
Jul 15 23:52:43.824715 kernel: x2apic enabled
Jul 15 23:52:43.824730 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 15 23:52:43.824741 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 15 23:52:43.824752 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 15 23:52:43.824766 kernel: kvm-guest: setup PV IPIs
Jul 15 23:52:43.824777 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 15 23:52:43.824789 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:52:43.824800 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 15 23:52:43.824811 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 15 23:52:43.824822 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 15 23:52:43.824833 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 15 23:52:43.824844 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 15 23:52:43.824858 kernel: Spectre V2 : Mitigation: Retpolines
Jul 15 23:52:43.824870 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 15 23:52:43.824881 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 15 23:52:43.824892 kernel: RETBleed: Mitigation: untrained return thunk
Jul 15 23:52:43.824903 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 15 23:52:43.824914 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 15 23:52:43.824925 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 15 23:52:43.824937 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 15 23:52:43.824949 kernel: x86/bugs: return thunk changed
Jul 15 23:52:43.824962 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 15 23:52:43.824973 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 15 23:52:43.824985 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 15 23:52:43.824996 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 15 23:52:43.825006 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 15 23:52:43.825018 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 15 23:52:43.825029 kernel: Freeing SMP alternatives memory: 32K
Jul 15 23:52:43.825040 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:52:43.825051 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:52:43.825065 kernel: landlock: Up and running.
Jul 15 23:52:43.825076 kernel: SELinux: Initializing.
Jul 15 23:52:43.825087 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:52:43.825102 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:52:43.825113 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 15 23:52:43.825124 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 15 23:52:43.825135 kernel: ... version:                0
Jul 15 23:52:43.825146 kernel: ... bit width:              48
Jul 15 23:52:43.825157 kernel: ... generic registers:      6
Jul 15 23:52:43.825171 kernel: ... value mask:             0000ffffffffffff
Jul 15 23:52:43.825182 kernel: ... max period:             00007fffffffffff
Jul 15 23:52:43.825193 kernel: ... fixed-purpose events:   0
Jul 15 23:52:43.825204 kernel: ... event mask:             000000000000003f
Jul 15 23:52:43.825215 kernel: signal: max sigframe size: 1776
Jul 15 23:52:43.825226 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:52:43.825237 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:52:43.825249 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:52:43.825260 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:52:43.825274 kernel: smpboot: x86: Booting SMP configuration:
Jul 15 23:52:43.825285 kernel: .... node  #0, CPUs:        #1  #2  #3
Jul 15 23:52:43.825296 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:52:43.825306 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 15 23:52:43.825318 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 136904K reserved, 0K cma-reserved)
Jul 15 23:52:43.825329 kernel: devtmpfs: initialized
Jul 15 23:52:43.825340 kernel: x86/mm: Memory block size: 128MB
Jul 15 23:52:43.825351 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:52:43.825363 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:52:43.825377 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:52:43.825388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:52:43.825399 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:52:43.825410 kernel: audit: type=2000 audit(1752623560.451:1): state=initialized audit_enabled=0 res=1
Jul 15 23:52:43.825421 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:52:43.825432 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 15 23:52:43.825443 kernel: cpuidle: using governor menu
Jul 15 23:52:43.825454 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:52:43.825465 kernel: dca service started, version 1.12.1
Jul 15 23:52:43.825479 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 15 23:52:43.825490 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 15 23:52:43.825501 kernel: PCI: Using configuration type 1 for base access
Jul 15 23:52:43.825512 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 15 23:52:43.825523 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:52:43.825534 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:52:43.825555 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:52:43.825567 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:52:43.825578 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:52:43.825592 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:52:43.825603 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:52:43.825614 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:52:43.825625 kernel: ACPI: Interpreter enabled
Jul 15 23:52:43.825636 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 15 23:52:43.825668 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 15 23:52:43.825680 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 15 23:52:43.825691 kernel: PCI: Using E820 reservations for host bridge windows
Jul 15 23:52:43.825701 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 15 23:52:43.825716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:52:43.825959 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:52:43.826117 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 15 23:52:43.826270 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 15 23:52:43.826285 kernel: PCI host bridge to bus 0000:00
Jul 15 23:52:43.826456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 15 23:52:43.826615 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 15 23:52:43.826802 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 15 23:52:43.826944 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 15 23:52:43.827090 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 15 23:52:43.827229 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 15 23:52:43.827369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:52:43.827571 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:52:43.827808 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 15 23:52:43.827993 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jul 15 23:52:43.828153 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jul 15 23:52:43.828307 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jul 15 23:52:43.828461 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 15 23:52:43.828689 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:52:43.828852 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 15 23:52:43.829015 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jul 15 23:52:43.829169 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 15 23:52:43.829342 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 15 23:52:43.829500 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jul 15 23:52:43.829706 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jul 15 23:52:43.829873 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 15 23:52:43.830053 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 15 23:52:43.830216 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jul 15 23:52:43.830371 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jul 15 23:52:43.830525 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 15 23:52:43.830718 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jul 15 23:52:43.830898 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 15 23:52:43.831053 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 15 23:52:43.831228 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 15 23:52:43.831381 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jul 15 23:52:43.831533 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jul 15 23:52:43.831737 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 15 23:52:43.831895 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 15 23:52:43.831911 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 15 23:52:43.831923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 15 23:52:43.831940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 15 23:52:43.831951 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 15 23:52:43.831962 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 15 23:52:43.831973 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 15 23:52:43.831984 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 15 23:52:43.831995 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 15 23:52:43.832006 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 15 23:52:43.832017 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 15 23:52:43.832029 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 15 23:52:43.832043 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 15 23:52:43.832054 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 15 23:52:43.832065 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 15 23:52:43.832076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 15 23:52:43.832087 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 15 23:52:43.832099 kernel: iommu: Default domain type: Translated
Jul 15 23:52:43.832110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 15 23:52:43.832121 kernel: PCI: Using ACPI for IRQ routing
Jul 15 23:52:43.832132 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 15 23:52:43.832147 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 15 23:52:43.832159 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 15 23:52:43.832314 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 15 23:52:43.832467 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 15 23:52:43.832632 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 15 23:52:43.832664 kernel: vgaarb: loaded
Jul 15 23:52:43.832675 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 15 23:52:43.832687 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 15 23:52:43.832698 kernel: clocksource: Switched to clocksource kvm-clock
Jul 15 23:52:43.832714 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:52:43.832725 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:52:43.832736 kernel: pnp: PnP ACPI init
Jul 15 23:52:43.832924 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 15 23:52:43.832942 kernel: pnp: PnP ACPI: found 6 devices
Jul 15 23:52:43.832953 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 15 23:52:43.832965 kernel: NET: Registered PF_INET protocol family
Jul 15 23:52:43.832976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:52:43.832992 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:52:43.833004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:52:43.833015 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:52:43.833027 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:52:43.833038 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:52:43.833049 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:52:43.833060 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:52:43.833071 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:52:43.833086 kernel: NET: Registered PF_XDP protocol family
Jul 15 23:52:43.833248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 15 23:52:43.833386 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 15 23:52:43.833519 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 15 23:52:43.833685 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 15 23:52:43.833821 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 15 23:52:43.833952 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 15 23:52:43.833967 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:52:43.833979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 15 23:52:43.833996 kernel: Initialise system trusted keyrings
Jul 15 23:52:43.834007 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:52:43.834019 kernel: Key type asymmetric registered
Jul 15 23:52:43.834030 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:52:43.834041 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 15 23:52:43.834053 kernel: io scheduler mq-deadline registered
Jul 15 23:52:43.834064 kernel: io scheduler kyber registered
Jul 15 23:52:43.834075 kernel: io scheduler bfq registered
Jul 15 23:52:43.834087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 15 23:52:43.834101 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 15 23:52:43.834113 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 15 23:52:43.834124 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 15 23:52:43.834136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:52:43.834147 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 15 23:52:43.834159 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 15 23:52:43.834170 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 15 23:52:43.834182 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 15 23:52:43.834364 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 15 23:52:43.834384 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 15 23:52:43.834520 kernel: rtc_cmos 00:04: registered as rtc0
Jul 15 23:52:43.834688 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T23:52:43 UTC (1752623563)
Jul 15 23:52:43.834865 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 15 23:52:43.834881 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 15 23:52:43.834892 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:52:43.834904 kernel: Segment Routing with IPv6
Jul 15 23:52:43.834915 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:52:43.834932 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:52:43.834943 kernel: Key type dns_resolver registered
Jul 15 23:52:43.834954 kernel: IPI shorthand broadcast: enabled
Jul 15 23:52:43.834966 kernel: sched_clock: Marking stable (2978002143, 111376950)->(3108974316, -19595223)
Jul 15 23:52:43.834977 kernel: registered taskstats version 1
Jul 15 23:52:43.834989 kernel: Loading compiled-in X.509 certificates
Jul 15 23:52:43.835000 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d'
Jul 15 23:52:43.835012 kernel: Demotion targets for Node 0: null
Jul 15 23:52:43.835023 kernel: Key type .fscrypt registered
Jul 15 23:52:43.835037 kernel: Key type fscrypt-provisioning registered
Jul 15 23:52:43.835048 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:52:43.835060 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:52:43.835071 kernel: ima: No architecture policies found
Jul 15 23:52:43.835082 kernel: clk: Disabling unused clocks
Jul 15 23:52:43.835094 kernel: Warning: unable to open an initial console.
Jul 15 23:52:43.835105 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jul 15 23:52:43.835117 kernel: Write protecting the kernel read-only data: 24576k
Jul 15 23:52:43.835131 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 15 23:52:43.835142 kernel: Run /init as init process
Jul 15 23:52:43.835153 kernel:   with arguments:
Jul 15 23:52:43.835164 kernel:     /init
Jul 15 23:52:43.835175 kernel:   with environment:
Jul 15 23:52:43.835186 kernel:     HOME=/
Jul 15 23:52:43.835197 kernel:     TERM=linux
Jul 15 23:52:43.835208 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:52:43.835220 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:52:43.835239 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:52:43.835266 systemd[1]: Detected virtualization kvm.
Jul 15 23:52:43.835279 systemd[1]: Detected architecture x86-64.
Jul 15 23:52:43.835290 systemd[1]: Running in initrd.
Jul 15 23:52:43.835303 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:52:43.835318 systemd[1]: Hostname set to .
Jul 15 23:52:43.835333 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:52:43.835345 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:52:43.835357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:52:43.835370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:52:43.835383 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:52:43.835396 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:52:43.835409 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:52:43.835425 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:52:43.835439 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:52:43.835452 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:52:43.835464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:52:43.835477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:52:43.835490 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:52:43.835502 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:52:43.835517 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:52:43.835529 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:52:43.835541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:52:43.835563 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:52:43.835576 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:52:43.835588 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:52:43.835601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:52:43.835613 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 23:52:43.835626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 23:52:43.835641 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:52:43.835669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 23:52:43.835681 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 23:52:43.835694 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 23:52:43.835708 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 23:52:43.835726 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 23:52:43.835738 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 23:52:43.835751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 23:52:43.835763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:52:43.835776 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 23:52:43.835789 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 23:52:43.835805 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 23:52:43.835818 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 23:52:43.835854 systemd-journald[219]: Collecting audit messages is disabled. Jul 15 23:52:43.835886 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 15 23:52:43.835899 systemd-journald[219]: Journal started Jul 15 23:52:43.835925 systemd-journald[219]: Runtime Journal (/run/log/journal/dd36dd0562ac46b4af17447b8c072753) is 6M, max 48.6M, 42.5M free. Jul 15 23:52:43.819891 systemd-modules-load[222]: Inserted module 'overlay' Jul 15 23:52:43.861390 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 23:52:43.861413 kernel: Bridge firewalling registered Jul 15 23:52:43.851245 systemd-modules-load[222]: Inserted module 'br_netfilter' Jul 15 23:52:43.867789 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 23:52:43.868470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 23:52:43.871728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:52:43.877807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 23:52:43.880038 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:52:43.883540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 23:52:43.888390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 23:52:43.899123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 23:52:43.901493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:52:43.906731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 23:52:43.907696 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 23:52:43.909750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 23:52:43.913603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 15 23:52:43.925978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:52:43.941881 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 15 23:52:43.985844 systemd-resolved[262]: Positive Trust Anchors: Jul 15 23:52:43.985866 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:52:43.985900 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:52:43.988863 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 15 23:52:43.990199 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 23:52:43.996344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:52:44.060701 kernel: SCSI subsystem initialized Jul 15 23:52:44.070679 kernel: Loading iSCSI transport class v2.0-870. 
Jul 15 23:52:44.082696 kernel: iscsi: registered transport (tcp) Jul 15 23:52:44.104687 kernel: iscsi: registered transport (qla4xxx) Jul 15 23:52:44.104773 kernel: QLogic iSCSI HBA Driver Jul 15 23:52:44.128139 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 23:52:44.157810 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 23:52:44.160280 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 23:52:44.219582 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 23:52:44.223915 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 23:52:44.286681 kernel: raid6: avx2x4 gen() 29948 MB/s Jul 15 23:52:44.303674 kernel: raid6: avx2x2 gen() 29847 MB/s Jul 15 23:52:44.328673 kernel: raid6: avx2x1 gen() 25020 MB/s Jul 15 23:52:44.328697 kernel: raid6: using algorithm avx2x4 gen() 29948 MB/s Jul 15 23:52:44.371790 kernel: raid6: .... xor() 8101 MB/s, rmw enabled Jul 15 23:52:44.371832 kernel: raid6: using avx2x2 recovery algorithm Jul 15 23:52:44.392681 kernel: xor: automatically using best checksumming function avx Jul 15 23:52:44.561696 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 23:52:44.572067 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 23:52:44.575929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:52:44.615625 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 15 23:52:44.621303 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:52:44.626777 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 23:52:44.657281 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Jul 15 23:52:44.687920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 15 23:52:44.691621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:52:44.769459 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:52:44.774248 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:52:44.800676 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 15 23:52:44.803684 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:52:44.805460 kernel: cryptd: max_cpu_qlen set to 1000
Jul 15 23:52:44.806891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:52:44.806913 kernel: GPT:9289727 != 19775487
Jul 15 23:52:44.806929 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:52:44.808660 kernel: GPT:9289727 != 19775487
Jul 15 23:52:44.808685 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:52:44.808695 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:52:44.813682 kernel: AES CTR mode by8 optimization enabled
Jul 15 23:52:44.845702 kernel: libata version 3.00 loaded.
Jul 15 23:52:44.853013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:52:44.891772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 15 23:52:44.891795 kernel: ahci 0000:00:1f.2: version 3.0
Jul 15 23:52:44.891990 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 15 23:52:44.853149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:52:44.892001 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:52:44.896893 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 15 23:52:44.897074 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 15 23:52:44.897216 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 15 23:52:44.901046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:52:44.903670 kernel: scsi host0: ahci
Jul 15 23:52:44.905679 kernel: scsi host1: ahci
Jul 15 23:52:44.907057 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:52:44.909744 kernel: scsi host2: ahci
Jul 15 23:52:44.922677 kernel: scsi host3: ahci
Jul 15 23:52:44.922906 kernel: scsi host4: ahci
Jul 15 23:52:44.923680 kernel: scsi host5: ahci
Jul 15 23:52:44.926070 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 0
Jul 15 23:52:44.926113 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 0
Jul 15 23:52:44.926124 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 0
Jul 15 23:52:44.927668 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 0
Jul 15 23:52:44.927689 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 0
Jul 15 23:52:44.929241 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 0
Jul 15 23:52:44.937592 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:52:44.978492 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 23:52:44.978931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:52:44.995621 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 23:52:44.995771 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 23:52:45.011930 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:52:45.014025 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:52:45.060181 disk-uuid[631]: Primary Header is updated.
Jul 15 23:52:45.060181 disk-uuid[631]: Secondary Entries is updated.
Jul 15 23:52:45.060181 disk-uuid[631]: Secondary Header is updated.
Jul 15 23:52:45.063908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:52:45.070687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:52:45.241109 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 15 23:52:45.241176 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 15 23:52:45.241188 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 15 23:52:45.241198 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 15 23:52:45.242678 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 15 23:52:45.242698 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 15 23:52:45.243750 kernel: ata3.00: applying bridge limits
Jul 15 23:52:45.244681 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 15 23:52:45.244698 kernel: ata3.00: configured for UDMA/100
Jul 15 23:52:45.245701 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 15 23:52:45.300706 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 15 23:52:45.300980 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 15 23:52:45.314689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 15 23:52:45.669961 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:52:45.671750 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:52:45.673690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:52:45.675040 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:52:45.676031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:52:45.708341 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:52:46.069679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:52:46.070708 disk-uuid[632]: The operation has completed successfully.
Jul 15 23:52:46.097998 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:52:46.098149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:52:46.139247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:52:46.164813 sh[660]: Success
Jul 15 23:52:46.181726 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:52:46.181766 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:52:46.183360 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:52:46.192671 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 15 23:52:46.226536 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:52:46.229683 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:52:46.251915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:52:46.258075 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:52:46.260081 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (672)
Jul 15 23:52:46.262319 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e
Jul 15 23:52:46.262356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:52:46.262372 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:52:46.267979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:52:46.268522 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:52:46.270240 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:52:46.271274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:52:46.275164 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:52:46.314786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (703)
Jul 15 23:52:46.314853 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:52:46.317231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:52:46.317287 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:52:46.326701 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:52:46.328531 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:52:46.330384 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:52:46.414481 ignition[745]: Ignition 2.21.0
Jul 15 23:52:46.414496 ignition[745]: Stage: fetch-offline
Jul 15 23:52:46.414542 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:52:46.414552 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:52:46.414640 ignition[745]: parsed url from cmdline: ""
Jul 15 23:52:46.414644 ignition[745]: no config URL provided
Jul 15 23:52:46.414667 ignition[745]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:52:46.414676 ignition[745]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:52:46.414702 ignition[745]: op(1): [started] loading QEMU firmware config module
Jul 15 23:52:46.414707 ignition[745]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 23:52:46.428611 ignition[745]: op(1): [finished] loading QEMU firmware config module
Jul 15 23:52:46.438810 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:52:46.441316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:52:46.473941 ignition[745]: parsing config with SHA512: e152997caef0fc009a7333bfc187ed6cfa975e427f4ebf462a2ae60d4ba77a9c1037a8d72802b84ce2e11412da7449434bc3bf9ad20285cefe8a4dbd40182f58
Jul 15 23:52:46.480291 unknown[745]: fetched base config from "system"
Jul 15 23:52:46.480480 unknown[745]: fetched user config from "qemu"
Jul 15 23:52:46.480841 ignition[745]: fetch-offline: fetch-offline passed
Jul 15 23:52:46.480899 ignition[745]: Ignition finished successfully
Jul 15 23:52:46.484638 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:52:46.495439 systemd-networkd[850]: lo: Link UP
Jul 15 23:52:46.495449 systemd-networkd[850]: lo: Gained carrier
Jul 15 23:52:46.497270 systemd-networkd[850]: Enumeration completed
Jul 15 23:52:46.497377 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:52:46.497772 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:52:46.497777 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:52:46.499642 systemd[1]: Reached target network.target - Network.
Jul 15 23:52:46.499752 systemd-networkd[850]: eth0: Link UP
Jul 15 23:52:46.499757 systemd-networkd[850]: eth0: Gained carrier
Jul 15 23:52:46.499765 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:52:46.501504 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 23:52:46.505906 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:52:46.531751 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:52:46.545270 ignition[854]: Ignition 2.21.0
Jul 15 23:52:46.545284 ignition[854]: Stage: kargs
Jul 15 23:52:46.545436 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:52:46.545448 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:52:46.547195 ignition[854]: kargs: kargs passed
Jul 15 23:52:46.547261 ignition[854]: Ignition finished successfully
Jul 15 23:52:46.551693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:52:46.554818 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:52:46.591011 ignition[863]: Ignition 2.21.0
Jul 15 23:52:46.591025 ignition[863]: Stage: disks
Jul 15 23:52:46.591205 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:52:46.591218 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:52:46.592620 ignition[863]: disks: disks passed
Jul 15 23:52:46.592717 ignition[863]: Ignition finished successfully
Jul 15 23:52:46.596665 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:52:46.599347 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:52:46.601481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:52:46.601586 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:52:46.602113 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:52:46.602430 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:52:46.604164 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:52:46.637642 systemd-fsck[873]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:52:46.645477 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:52:46.649740 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:52:46.778689 kernel: EXT4-fs (vda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none.
Jul 15 23:52:46.778909 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:52:46.779538 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:52:46.781195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:52:46.783277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:52:46.785361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:52:46.785403 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:52:46.785429 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:52:46.801040 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:52:46.804541 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:52:46.809817 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881)
Jul 15 23:52:46.809849 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:52:46.809864 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:52:46.809887 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:52:46.813025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:52:46.847716 initrd-setup-root[905]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:52:46.852216 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:52:46.857387 initrd-setup-root[919]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:52:46.862012 initrd-setup-root[926]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:52:46.972380 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:52:46.974077 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:52:46.976314 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:52:46.995735 kernel: BTRFS info (device vda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:52:47.009840 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:52:47.032927 ignition[996]: INFO : Ignition 2.21.0
Jul 15 23:52:47.032927 ignition[996]: INFO : Stage: mount
Jul 15 23:52:47.036377 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:52:47.036377 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:52:47.039507 ignition[996]: INFO : mount: mount passed
Jul 15 23:52:47.040283 ignition[996]: INFO : Ignition finished successfully
Jul 15 23:52:47.043955 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:52:47.046573 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:52:47.258535 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:52:47.260589 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:52:47.296847 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1008)
Jul 15 23:52:47.296931 kernel: BTRFS info (device vda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b
Jul 15 23:52:47.296946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 15 23:52:47.297940 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:52:47.304880 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:52:47.348273 ignition[1025]: INFO : Ignition 2.21.0 Jul 15 23:52:47.348273 ignition[1025]: INFO : Stage: files Jul 15 23:52:47.350231 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:52:47.350231 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:52:47.350231 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:52:47.354114 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:52:47.354114 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:52:47.357000 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:52:47.357000 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:52:47.357000 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:52:47.357000 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 23:52:47.357000 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 23:52:47.355212 unknown[1025]: wrote ssh authorized keys file for user: core Jul 15 23:52:47.389853 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:52:47.515682 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 23:52:47.515682 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:52:47.519468 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 23:52:47.610675 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 23:52:47.731060 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:52:47.733414 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:52:47.934250 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:52:47.936641 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:52:47.936641 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:52:48.094150 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:52:48.094150 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:52:48.112817 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 23:52:48.144855 systemd-networkd[850]: eth0: Gained IPv6LL Jul 15 23:52:48.401153 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 23:52:49.253672 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 23:52:49.256362 ignition[1025]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 23:52:49.257988 ignition[1025]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:52:49.265701 ignition[1025]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:52:49.265701 ignition[1025]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 23:52:49.265701 ignition[1025]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 23:52:49.271772 ignition[1025]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:52:49.271772 ignition[1025]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:52:49.271772 ignition[1025]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 23:52:49.271772 ignition[1025]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 23:52:49.289633 ignition[1025]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:52:49.295446 ignition[1025]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:52:49.297242 ignition[1025]: INFO : files: files passed Jul 15 23:52:49.297242 ignition[1025]: INFO : Ignition finished successfully Jul 15 23:52:49.305302 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 23:52:49.307047 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:52:49.310117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:52:49.331910 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:52:49.332092 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 15 23:52:49.337422 initrd-setup-root-after-ignition[1054]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 23:52:49.342484 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:52:49.342484 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:52:49.346382 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:52:49.350175 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:52:49.350533 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:52:49.355610 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:52:49.436093 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:52:49.436239 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:52:49.438544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:52:49.439604 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:52:49.440002 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:52:49.441089 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:52:49.481567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:52:49.484610 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:52:49.516702 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:52:49.518022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:52:49.520223 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:52:49.522257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:52:49.522406 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:52:49.524723 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:52:49.526234 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:52:49.528345 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:52:49.530365 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:52:49.532370 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:52:49.534521 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:52:49.536698 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:52:49.538738 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:52:49.540971 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:52:49.542909 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:52:49.545062 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:52:49.546801 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:52:49.546921 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:52:49.549214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:52:49.550681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:52:49.552760 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:52:49.552922 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:52:49.554945 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:52:49.555063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:52:49.557393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:52:49.557507 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:52:49.559333 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:52:49.561042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:52:49.565420 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:52:49.565593 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:52:49.565958 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:52:49.566245 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:52:49.566339 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:52:49.566766 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:52:49.566850 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:52:49.567267 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:52:49.567393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:52:49.567738 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:52:49.567840 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:52:49.569092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:52:49.569424 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:52:49.569533 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:52:49.570767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:52:49.571038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:52:49.571141 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:52:49.571439 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:52:49.571536 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:52:49.575666 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:52:49.588895 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:52:49.613218 ignition[1080]: INFO : Ignition 2.21.0
Jul 15 23:52:49.613218 ignition[1080]: INFO : Stage: umount
Jul 15 23:52:49.615031 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:52:49.615031 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:52:49.619360 ignition[1080]: INFO : umount: umount passed
Jul 15 23:52:49.619360 ignition[1080]: INFO : Ignition finished successfully
Jul 15 23:52:49.618487 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:52:49.623250 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:52:49.623420 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:52:49.626844 systemd[1]: Stopped target network.target - Network.
Jul 15 23:52:49.626934 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:52:49.626987 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:52:49.629812 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:52:49.629862 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:52:49.630797 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:52:49.630851 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:52:49.631127 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:52:49.631172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:52:49.631553 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:52:49.637738 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:52:49.647476 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:52:49.647694 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:52:49.654001 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:52:49.654368 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:52:49.654531 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:52:49.657851 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:52:49.658971 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:52:49.662265 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:52:49.662327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:52:49.665694 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:52:49.666625 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:52:49.666711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:52:49.667861 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:52:49.667922 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:52:49.672100 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:52:49.672150 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:52:49.673088 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:52:49.673139 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:52:49.677019 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:52:49.682588 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:52:49.682672 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:52:49.698597 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:52:49.699880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:52:49.703542 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:52:49.703726 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:52:49.705852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:52:49.705905 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:52:49.706582 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:52:49.706623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:52:49.707045 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:52:49.707094 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:52:49.707904 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:52:49.707955 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:52:49.714772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:52:49.714836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:52:49.717031 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:52:49.718888 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:52:49.718948 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:52:49.724442 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:52:49.724502 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:52:49.729672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:52:49.729727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:52:49.734520 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:52:49.734588 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:52:49.734639 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:52:49.754828 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:52:49.754959 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:52:49.870482 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:52:49.870636 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:52:49.872102 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:52:49.874682 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:52:49.874744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:52:49.877824 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:52:49.909667 systemd[1]: Switching root.
Jul 15 23:52:50.009808 systemd-journald[219]: Journal stopped
Jul 15 23:52:51.521860 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:52:51.521921 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:52:51.521940 kernel: SELinux: policy capability open_perms=1
Jul 15 23:52:51.521957 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:52:51.521969 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:52:51.521983 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:52:51.521995 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:52:51.522006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:52:51.522018 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:52:51.522034 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:52:51.522046 kernel: audit: type=1403 audit(1752623570.657:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:52:51.522059 systemd[1]: Successfully loaded SELinux policy in 49.552ms.
Jul 15 23:52:51.522081 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.127ms.
Jul 15 23:52:51.522095 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:52:51.522110 systemd[1]: Detected virtualization kvm.
Jul 15 23:52:51.522122 systemd[1]: Detected architecture x86-64.
Jul 15 23:52:51.522135 systemd[1]: Detected first boot.
Jul 15 23:52:51.522147 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:52:51.522160 zram_generator::config[1127]: No configuration found.
Jul 15 23:52:51.522173 kernel: Guest personality initialized and is inactive
Jul 15 23:52:51.522184 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 15 23:52:51.522196 kernel: Initialized host personality
Jul 15 23:52:51.522214 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:52:51.522231 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:52:51.522244 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:52:51.522256 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:52:51.522269 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:52:51.522281 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:52:51.522294 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:52:51.522306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:52:51.522318 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:52:51.522342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:52:51.522360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:52:51.522373 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:52:51.522386 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:52:51.522398 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:52:51.522410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:52:51.522424 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:52:51.522437 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:52:51.522449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:52:51.522465 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:52:51.522478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:52:51.522490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 15 23:52:51.522502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:52:51.522514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:52:51.522527 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:52:51.522539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:52:51.522558 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:52:51.522570 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:52:51.522583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:52:51.522595 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:52:51.522607 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:52:51.522619 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:52:51.522631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:52:51.522643 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:52:51.522738 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:52:51.522754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:52:51.522766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:52:51.522778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:52:51.522790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:52:51.522802 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:52:51.522814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:52:51.522826 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:52:51.522838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:52:51.522851 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:52:51.522867 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:52:51.522879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:52:51.522892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:52:51.522905 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:52:51.522917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:52:51.522932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:52:51.522944 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:52:51.522956 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:52:51.522968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:52:51.522982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:52:51.522994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:52:51.523006 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:52:51.523018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:52:51.523031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:52:51.523043 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:52:51.523056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:52:51.523067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:52:51.523082 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:52:51.523095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:52:51.523107 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:52:51.523119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:52:51.523133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:52:51.523145 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:52:51.523157 kernel: ACPI: bus type drm_connector registered
Jul 15 23:52:51.523168 kernel: fuse: init (API version 7.41)
Jul 15 23:52:51.523180 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:52:51.523195 kernel: loop: module loaded
Jul 15 23:52:51.523207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:52:51.523218 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:52:51.523230 systemd[1]: Stopped verity-setup.service.
Jul 15 23:52:51.523243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 15 23:52:51.523278 systemd-journald[1212]: Collecting audit messages is disabled.
Jul 15 23:52:51.523301 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:52:51.523314 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:52:51.523336 systemd-journald[1212]: Journal started
Jul 15 23:52:51.523364 systemd-journald[1212]: Runtime Journal (/run/log/journal/dd36dd0562ac46b4af17447b8c072753) is 6M, max 48.6M, 42.5M free.
Jul 15 23:52:51.241583 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:52:51.255633 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 23:52:51.256270 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:52:51.527698 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:52:51.529598 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:52:51.530930 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:52:51.532354 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:52:51.533709 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:52:51.535223 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:52:51.537067 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:52:51.538927 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:52:51.539154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:52:51.540914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:52:51.541168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:52:51.542985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:52:51.543252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:52:51.544967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:52:51.545255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:52:51.547097 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:52:51.547438 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:52:51.549176 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:52:51.549477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:52:51.551581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:52:51.553376 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:52:51.556440 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:52:51.558409 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:52:51.574876 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:52:51.578092 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:52:51.581273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:52:51.582852 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:52:51.582979 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:52:51.585483 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:52:51.590780 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:52:51.592176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:52:51.593800 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:52:51.597348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:52:51.599019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:52:51.600710 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:52:51.602201 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:52:51.603684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:52:51.607813 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:52:51.610681 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:52:51.613839 systemd-journald[1212]: Time spent on flushing to /var/log/journal/dd36dd0562ac46b4af17447b8c072753 is 56.435ms for 981 entries.
Jul 15 23:52:51.613839 systemd-journald[1212]: System Journal (/var/log/journal/dd36dd0562ac46b4af17447b8c072753) is 8M, max 195.6M, 187.6M free.
Jul 15 23:52:51.680404 systemd-journald[1212]: Received client request to flush runtime journal.
Jul 15 23:52:51.680443 kernel: loop0: detected capacity change from 0 to 221472
Jul 15 23:52:51.613781 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:52:51.616381 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:52:51.674531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:52:51.685498 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:52:51.687873 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:52:51.693443 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:52:51.696795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:52:51.699166 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:52:51.713955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:52:51.725683 kernel: loop1: detected capacity change from 0 to 146240
Jul 15 23:52:51.734590 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:52:51.738059 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:52:51.743396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:52:51.767678 kernel: loop2: detected capacity change from 0 to 113872
Jul 15 23:52:51.784571 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 15 23:52:51.784591 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
Jul 15 23:52:51.795445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:52:51.799690 kernel: loop3: detected capacity change from 0 to 221472
Jul 15 23:52:51.820711 kernel: loop4: detected capacity change from 0 to 146240
Jul 15 23:52:51.836680 kernel: loop5: detected capacity change from 0 to 113872
Jul 15 23:52:51.853673 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 23:52:51.854439 (sd-merge)[1267]: Merged extensions into '/usr'.
Jul 15 23:52:51.859942 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:52:51.859959 systemd[1]: Reloading...
Jul 15 23:52:52.021243 zram_generator::config[1294]: No configuration found.
Jul 15 23:52:52.177744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:52:52.245683 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:52:52.276813 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:52:52.277105 systemd[1]: Reloading finished in 416 ms.
Jul 15 23:52:52.314331 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:52:52.331192 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:52:52.333243 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:52:52.343711 systemd[1]: Reload requested from client PID 1330 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:52:52.343725 systemd[1]: Reloading...
Jul 15 23:52:52.364076 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:52:52.364571 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:52:52.365119 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:52:52.365468 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:52:52.366952 systemd-tmpfiles[1331]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:52:52.367313 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Jul 15 23:52:52.367457 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Jul 15 23:52:52.371858 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:52:52.371943 systemd-tmpfiles[1331]: Skipping /boot Jul 15 23:52:52.452681 zram_generator::config[1359]: No configuration found. Jul 15 23:52:52.464054 systemd-tmpfiles[1331]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 23:52:52.464997 systemd-tmpfiles[1331]: Skipping /boot Jul 15 23:52:52.563143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:52:52.644736 systemd[1]: Reloading finished in 300 ms. Jul 15 23:52:52.667338 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 23:52:52.711323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.711503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:52:52.712896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 23:52:52.715071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 23:52:52.717717 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 23:52:52.718901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:52:52.719128 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 15 23:52:52.719264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.722195 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.722393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 23:52:52.722559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:52:52.722645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:52:52.722780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.727182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 23:52:52.727427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 23:52:52.729156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 23:52:52.729427 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 23:52:52.731247 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 23:52:52.731511 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 23:52:52.737605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.738020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 15 23:52:52.739557 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 23:52:52.740886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 23:52:52.740929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 23:52:52.740994 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 23:52:52.741056 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 23:52:52.741110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 23:52:52.741597 systemd[1]: Finished ensure-sysext.service. Jul 15 23:52:52.742898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 23:52:52.753367 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:52:52.756606 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 23:52:52.851968 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 23:52:52.914950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 23:52:52.921012 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 23:52:52.932031 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 23:52:52.934161 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 15 23:52:52.934406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 23:52:52.939769 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 23:52:53.197707 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 23:52:53.273083 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 23:52:53.275416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 23:52:53.298062 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 23:52:53.301885 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 23:52:53.395688 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 23:52:53.424847 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 23:52:53.448450 systemd-resolved[1415]: Positive Trust Anchors: Jul 15 23:52:53.448470 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 23:52:53.448510 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 23:52:53.453546 systemd-resolved[1415]: Defaulting to hostname 'linux'. Jul 15 23:52:53.455471 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 15 23:52:53.490531 augenrules[1445]: No rules Jul 15 23:52:53.491124 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 23:52:53.492929 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:52:53.493299 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:52:53.496145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:52:53.499420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 23:52:53.501789 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 23:52:53.534367 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 23:52:53.553910 systemd-udevd[1452]: Using default interface naming scheme 'v255'. Jul 15 23:52:53.576640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 23:52:53.578379 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:52:53.579788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 23:52:53.581552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 23:52:53.582991 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 23:52:53.584557 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 23:52:53.585816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 23:52:53.587502 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 23:52:53.588851 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 23:52:53.588951 systemd[1]: Reached target paths.target - Path Units. 
Jul 15 23:52:53.590068 systemd[1]: Reached target timers.target - Timer Units. Jul 15 23:52:53.592869 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 23:52:53.597256 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 23:52:53.606196 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:52:53.607897 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:52:53.609299 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:52:53.619926 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:52:53.621782 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:52:53.629308 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:52:53.631032 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:52:53.643327 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:52:53.644336 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:52:53.645304 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:52:53.645395 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:52:53.649904 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:52:53.653284 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 23:52:53.655407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:52:53.660080 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 15 23:52:53.660227 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:52:53.668839 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 23:52:53.673090 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:52:53.676892 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:52:53.682956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:52:53.688595 jq[1487]: false Jul 15 23:52:53.690908 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:52:53.697517 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:52:53.699621 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:52:53.702089 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing passwd entry cache Jul 15 23:52:53.701992 oslogin_cache_refresh[1489]: Refreshing passwd entry cache Jul 15 23:52:53.706508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:52:53.712083 oslogin_cache_refresh[1489]: Failure getting users, quitting Jul 15 23:52:53.712936 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting users, quitting Jul 15 23:52:53.712936 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 15 23:52:53.712936 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing group entry cache Jul 15 23:52:53.712936 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting groups, quitting Jul 15 23:52:53.712936 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:52:53.709305 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:52:53.712112 oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 23:52:53.712157 oslogin_cache_refresh[1489]: Refreshing group entry cache Jul 15 23:52:53.712739 oslogin_cache_refresh[1489]: Failure getting groups, quitting Jul 15 23:52:53.712752 oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 23:52:53.713998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:52:53.720286 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 23:52:53.727299 extend-filesystems[1488]: Found /dev/vda6 Jul 15 23:52:53.722036 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:52:53.722309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:52:53.722736 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 23:52:53.722977 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 23:52:53.729429 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:52:53.730952 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:52:53.732526 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:52:53.733706 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 15 23:52:53.745365 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 23:52:53.745659 jq[1507]: true Jul 15 23:52:53.752740 extend-filesystems[1488]: Found /dev/vda9 Jul 15 23:52:53.758926 extend-filesystems[1488]: Checking size of /dev/vda9 Jul 15 23:52:53.759819 update_engine[1505]: I20250715 23:52:53.755318 1505 main.cc:92] Flatcar Update Engine starting Jul 15 23:52:53.782012 dbus-daemon[1485]: [system] SELinux support is enabled Jul 15 23:52:53.843936 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 23:52:53.844080 jq[1522]: true Jul 15 23:52:53.844745 update_engine[1505]: I20250715 23:52:53.790035 1505 update_check_scheduler.cc:74] Next update check in 6m9s Jul 15 23:52:53.782220 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:52:53.858347 extend-filesystems[1488]: Resized partition /dev/vda9 Jul 15 23:52:53.881479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 15 23:52:53.882744 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:52:53.877935 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:52:53.883966 tar[1512]: linux-amd64/helm Jul 15 23:52:53.880075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:52:53.880106 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 23:52:53.881482 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:52:53.881498 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:52:53.886711 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 15 23:52:53.888870 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 23:52:53.899711 kernel: ACPI: button: Power Button [PWRF] Jul 15 23:52:53.918684 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 23:52:53.919805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 23:52:53.924146 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 23:52:53.926194 systemd-networkd[1484]: lo: Link UP Jul 15 23:52:53.926199 systemd-networkd[1484]: lo: Gained carrier Jul 15 23:52:53.928066 systemd-networkd[1484]: Enumeration completed Jul 15 23:52:53.928164 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:52:53.929549 systemd[1]: Reached target network.target - Network. Jul 15 23:52:53.929668 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:52:53.929673 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:52:53.930494 systemd-networkd[1484]: eth0: Link UP Jul 15 23:52:53.930773 systemd-networkd[1484]: eth0: Gained carrier Jul 15 23:52:53.930787 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:52:53.934607 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:52:53.941975 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:52:53.951064 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 23:52:53.951064 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:52:53.951064 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 15 23:52:53.945376 systemd-logind[1498]: New seat seat0. Jul 15 23:52:53.973977 bash[1553]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:52:53.974075 extend-filesystems[1488]: Resized filesystem in /dev/vda9 Jul 15 23:52:53.946194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:52:53.946729 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:52:53.948713 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:52:53.949824 systemd-timesyncd[1418]: Network configuration changed, trying to establish connection. Jul 15 23:52:53.951070 systemd-timesyncd[1418]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 23:52:53.951619 systemd-timesyncd[1418]: Initial clock synchronization to Tue 2025-07-15 23:52:53.895120 UTC. Jul 15 23:52:53.953276 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:52:53.953597 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:52:53.964229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:52:53.969264 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 23:52:53.985382 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 23:52:53.987110 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:52:54.078248 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 23:52:54.078673 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 23:52:54.093976 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 15 23:52:54.094782 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:52:54.331136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:52:54.364319 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 23:52:54.366240 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:52:54.385235 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 23:52:54.413868 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:52:54.418051 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:52:54.452095 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:52:54.452373 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:52:54.456689 kernel: kvm_amd: TSC scaling supported Jul 15 23:52:54.456751 kernel: kvm_amd: Nested Virtualization enabled Jul 15 23:52:54.456777 kernel: kvm_amd: Nested Paging enabled Jul 15 23:52:54.456806 kernel: kvm_amd: LBR virtualization supported Jul 15 23:52:54.456829 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 23:52:54.456890 kernel: kvm_amd: Virtual GIF supported Jul 15 23:52:54.525766 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:52:54.640795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:52:54.656902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:52:54.659767 kernel: EDAC MC: Ver: 3.0.0 Jul 15 23:52:54.663760 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:52:54.666277 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 23:52:54.667717 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 15 23:52:54.692521 containerd[1563]: time="2025-07-15T23:52:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:52:54.693764 containerd[1563]: time="2025-07-15T23:52:54.693704609Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:52:54.708494 containerd[1563]: time="2025-07-15T23:52:54.708412058Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.924µs" Jul 15 23:52:54.708494 containerd[1563]: time="2025-07-15T23:52:54.708471641Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:52:54.708494 containerd[1563]: time="2025-07-15T23:52:54.708492737Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:52:54.708802 containerd[1563]: time="2025-07-15T23:52:54.708776308Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:52:54.708802 containerd[1563]: time="2025-07-15T23:52:54.708800189Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:52:54.708890 containerd[1563]: time="2025-07-15T23:52:54.708834404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:52:54.708938 containerd[1563]: time="2025-07-15T23:52:54.708914224Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:52:54.708938 containerd[1563]: time="2025-07-15T23:52:54.708927932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 
Jul 15 23:52:54.709355 containerd[1563]: time="2025-07-15T23:52:54.709316352Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709355 containerd[1563]: time="2025-07-15T23:52:54.709336649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709355 containerd[1563]: time="2025-07-15T23:52:54.709347881Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709355 containerd[1563]: time="2025-07-15T23:52:54.709356287Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709505 containerd[1563]: time="2025-07-15T23:52:54.709479068Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709856 containerd[1563]: time="2025-07-15T23:52:54.709809981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709856 containerd[1563]: time="2025-07-15T23:52:54.709852932Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:52:54.709935 containerd[1563]: time="2025-07-15T23:52:54.709865162Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:52:54.709935 containerd[1563]: time="2025-07-15T23:52:54.709917337Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 
Jul 15 23:52:54.710325 containerd[1563]: time="2025-07-15T23:52:54.710264175Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:52:54.710710 containerd[1563]: time="2025-07-15T23:52:54.710445720Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:52:54.791003 tar[1512]: linux-amd64/LICENSE Jul 15 23:52:54.791162 tar[1512]: linux-amd64/README.md Jul 15 23:52:54.813257 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:52:54.909374 containerd[1563]: time="2025-07-15T23:52:54.909203564Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:52:54.909374 containerd[1563]: time="2025-07-15T23:52:54.909318657Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:52:54.909374 containerd[1563]: time="2025-07-15T23:52:54.909337317Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909381764Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909407863Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909423358Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909437325Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909457023Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 
Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909471620Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909484419Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909497337Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:52:54.909545 containerd[1563]: time="2025-07-15T23:52:54.909517974Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:52:54.909802 containerd[1563]: time="2025-07-15T23:52:54.909778781Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:52:54.909842 containerd[1563]: time="2025-07-15T23:52:54.909821322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:52:54.909877 containerd[1563]: time="2025-07-15T23:52:54.909856285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:52:54.909897 containerd[1563]: time="2025-07-15T23:52:54.909878249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:52:54.909897 containerd[1563]: time="2025-07-15T23:52:54.909892876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:52:54.909932 containerd[1563]: time="2025-07-15T23:52:54.909906813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:52:54.909932 containerd[1563]: time="2025-07-15T23:52:54.909920621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 
Jul 15 23:52:54.909987 containerd[1563]: time="2025-07-15T23:52:54.909935896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:52:54.909987 containerd[1563]: time="2025-07-15T23:52:54.909951441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:52:54.909987 containerd[1563]: time="2025-07-15T23:52:54.909965978Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:52:54.909987 containerd[1563]: time="2025-07-15T23:52:54.909979196Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:52:54.910138 containerd[1563]: time="2025-07-15T23:52:54.910104602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:52:54.910138 containerd[1563]: time="2025-07-15T23:52:54.910130840Z" level=info msg="Start snapshots syncer" Jul 15 23:52:54.910201 containerd[1563]: time="2025-07-15T23:52:54.910175627Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 
Jul 15 23:52:54.911924 containerd[1563]: time="2025-07-15T23:52:54.911834488Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:52:54.912114 containerd[1563]: time="2025-07-15T23:52:54.911951078Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 
Jul 15 23:52:54.912155 containerd[1563]: time="2025-07-15T23:52:54.912135879Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:52:54.912410 containerd[1563]: time="2025-07-15T23:52:54.912375839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:52:54.912435 containerd[1563]: time="2025-07-15T23:52:54.912418110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:52:54.912455 containerd[1563]: time="2025-07-15T23:52:54.912434933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:52:54.912476 containerd[1563]: time="2025-07-15T23:52:54.912451237Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:52:54.912476 containerd[1563]: time="2025-07-15T23:52:54.912468100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:52:54.912513 containerd[1563]: time="2025-07-15T23:52:54.912484613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:52:54.912513 containerd[1563]: time="2025-07-15T23:52:54.912500557Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:52:54.912560 containerd[1563]: time="2025-07-15T23:52:54.912541201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:52:54.912581 containerd[1563]: time="2025-07-15T23:52:54.912557814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:52:54.912581 containerd[1563]: time="2025-07-15T23:52:54.912573828Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:52:54.912642 containerd[1563]: time="2025-07-15T23:52:54.912620273Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:52:54.912694 containerd[1563]: time="2025-07-15T23:52:54.912663962Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:52:54.912694 containerd[1563]: time="2025-07-15T23:52:54.912680016Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912693784Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912706133Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912721369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912744950Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912772376Z" level=info msg="runtime interface created" Jul 15 23:52:54.912782 containerd[1563]: time="2025-07-15T23:52:54.912780693Z" level=info msg="created NRI interface" Jul 15 23:52:54.912951 containerd[1563]: time="2025-07-15T23:52:54.912793152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:52:54.912951 containerd[1563]: time="2025-07-15T23:52:54.912813599Z" level=info msg="Connect containerd service" Jul 15 23:52:54.912951 containerd[1563]: time="2025-07-15T23:52:54.912857677Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:52:54.915722 
containerd[1563]: time="2025-07-15T23:52:54.915676586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:52:55.054743 containerd[1563]: time="2025-07-15T23:52:55.054676040Z" level=info msg="Start subscribing containerd event" Jul 15 23:52:55.054874 containerd[1563]: time="2025-07-15T23:52:55.054774662Z" level=info msg="Start recovering state" Jul 15 23:52:55.054980 containerd[1563]: time="2025-07-15T23:52:55.054964986Z" level=info msg="Start event monitor" Jul 15 23:52:55.055007 containerd[1563]: time="2025-07-15T23:52:55.054989471Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:52:55.055007 containerd[1563]: time="2025-07-15T23:52:55.054999906Z" level=info msg="Start streaming server" Jul 15 23:52:55.055053 containerd[1563]: time="2025-07-15T23:52:55.055026130Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:52:55.055053 containerd[1563]: time="2025-07-15T23:52:55.055034487Z" level=info msg="runtime interface starting up..." Jul 15 23:52:55.055107 containerd[1563]: time="2025-07-15T23:52:55.055028496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:52:55.055133 containerd[1563]: time="2025-07-15T23:52:55.055041498Z" level=info msg="starting plugins..." Jul 15 23:52:55.055155 containerd[1563]: time="2025-07-15T23:52:55.055132230Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:52:55.055155 containerd[1563]: time="2025-07-15T23:52:55.055136106Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:52:55.055349 containerd[1563]: time="2025-07-15T23:52:55.055333818Z" level=info msg="containerd successfully booted in 0.363746s" Jul 15 23:52:55.055518 systemd[1]: Started containerd.service - containerd container runtime. 
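The `failed to load cni during init` error above is containerd reporting that `/etc/cni/net.d` contains no network config at this point in boot; a CNI plugin or provisioning step is expected to drop one there later, so the message is expected on a not-yet-bootstrapped node. For illustration only, a hypothetical minimal conflist of the shape containerd looks for in that directory (the name, bridge device, and subnet here are placeholders, not taken from this host):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Once any `*.conflist` file appears there, the "cni network conf syncer" started later in this log picks it up without a containerd restart.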
Jul 15 23:52:55.504897 systemd-networkd[1484]: eth0: Gained IPv6LL
Jul 15 23:52:55.508382 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:52:55.510413 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:52:55.513233 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 15 23:52:55.515882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:52:55.518277 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:52:55.554007 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:52:55.557917 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 15 23:52:55.558332 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 15 23:52:55.560178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:52:56.916243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:52:56.918148 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:52:56.919666 systemd[1]: Startup finished in 3.044s (kernel) + 7.032s (initrd) + 6.309s (userspace) = 16.385s.
Jul 15 23:52:56.962686 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:52:57.404500 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:52:57.406098 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:41924.service - OpenSSH per-connection server daemon (10.0.0.1:41924).
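The warning about unset `KUBELET_EXTRA_ARGS` and `KUBELET_KUBEADM_ARGS` comes from the kubelet unit expanding variables whose environment files have not been written yet; it is benign. As a sketch only (following the common kubeadm packaging pattern, not this host's actual unit files), the drop-in that produces this shape of warning looks roughly like:

```ini
; hypothetical /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
; the "-" prefix makes missing EnvironmentFile paths non-fatal, but the
; referenced variables then expand to empty strings, hence the warning
[Service]
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```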
Jul 15 23:52:57.483798 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 41924 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:57.486625 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:57.494855 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:52:57.496074 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:52:57.503911 systemd-logind[1498]: New session 1 of user core.
Jul 15 23:52:57.542597 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:52:57.546193 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:52:57.573535 (systemd)[1676]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:52:57.576481 systemd-logind[1498]: New session c1 of user core.
Jul 15 23:52:57.609475 kubelet[1660]: E0715 23:52:57.609395 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:52:57.613986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:52:57.614189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:52:57.614600 systemd[1]: kubelet.service: Consumed 1.838s CPU time, 264.9M memory peak.
Jul 15 23:52:57.741333 systemd[1676]: Queued start job for default target default.target.
Jul 15 23:52:57.765872 systemd[1676]: Created slice app.slice - User Application Slice.
Jul 15 23:52:57.765914 systemd[1676]: Reached target paths.target - Paths.
Jul 15 23:52:57.765974 systemd[1676]: Reached target timers.target - Timers.
Jul 15 23:52:57.768162 systemd[1676]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:52:57.893814 systemd[1676]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:52:57.894014 systemd[1676]: Reached target sockets.target - Sockets.
Jul 15 23:52:57.894078 systemd[1676]: Reached target basic.target - Basic System.
Jul 15 23:52:57.894127 systemd[1676]: Reached target default.target - Main User Target.
Jul 15 23:52:57.894183 systemd[1676]: Startup finished in 306ms.
Jul 15 23:52:57.894470 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:52:57.896609 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:52:57.969410 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:41928.service - OpenSSH per-connection server daemon (10.0.0.1:41928).
Jul 15 23:52:58.028427 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 41928 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.030214 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.036354 systemd-logind[1498]: New session 2 of user core.
Jul 15 23:52:58.045839 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 15 23:52:58.101809 sshd[1690]: Connection closed by 10.0.0.1 port 41928
Jul 15 23:52:58.102225 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Jul 15 23:52:58.115488 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:41928.service: Deactivated successfully.
Jul 15 23:52:58.117662 systemd[1]: session-2.scope: Deactivated successfully.
Jul 15 23:52:58.118376 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit.
Jul 15 23:52:58.121463 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:55888.service - OpenSSH per-connection server daemon (10.0.0.1:55888).
Jul 15 23:52:58.122249 systemd-logind[1498]: Removed session 2.
Jul 15 23:52:58.172040 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 55888 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.173850 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.178521 systemd-logind[1498]: New session 3 of user core.
Jul 15 23:52:58.191774 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 15 23:52:58.240425 sshd[1698]: Connection closed by 10.0.0.1 port 55888
Jul 15 23:52:58.240761 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Jul 15 23:52:58.254057 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:55888.service: Deactivated successfully.
Jul 15 23:52:58.255617 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:52:58.256388 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:52:58.258844 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:55898.service - OpenSSH per-connection server daemon (10.0.0.1:55898).
Jul 15 23:52:58.259541 systemd-logind[1498]: Removed session 3.
Jul 15 23:52:58.307037 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 55898 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.309151 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.313513 systemd-logind[1498]: New session 4 of user core.
Jul 15 23:52:58.323791 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:52:58.378000 sshd[1706]: Connection closed by 10.0.0.1 port 55898
Jul 15 23:52:58.378423 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Jul 15 23:52:58.394005 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:55898.service: Deactivated successfully.
Jul 15 23:52:58.395804 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:52:58.396570 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:52:58.399539 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:55910.service - OpenSSH per-connection server daemon (10.0.0.1:55910).
Jul 15 23:52:58.400141 systemd-logind[1498]: Removed session 4.
Jul 15 23:52:58.458416 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 55910 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.460205 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.464974 systemd-logind[1498]: New session 5 of user core.
Jul 15 23:52:58.474802 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:52:58.535704 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:52:58.536132 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:52:58.559239 sudo[1715]: pam_unix(sudo:session): session closed for user root
Jul 15 23:52:58.561353 sshd[1714]: Connection closed by 10.0.0.1 port 55910
Jul 15 23:52:58.561815 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Jul 15 23:52:58.576051 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:55910.service: Deactivated successfully.
Jul 15 23:52:58.577877 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 23:52:58.578755 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit.
Jul 15 23:52:58.581545 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:55924.service - OpenSSH per-connection server daemon (10.0.0.1:55924).
Jul 15 23:52:58.582168 systemd-logind[1498]: Removed session 5.
Jul 15 23:52:58.634394 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 55924 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.636054 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.641906 systemd-logind[1498]: New session 6 of user core.
Jul 15 23:52:58.651940 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 23:52:58.707276 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:52:58.707746 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:52:58.716070 sudo[1725]: pam_unix(sudo:session): session closed for user root
Jul 15 23:52:58.724495 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:52:58.724941 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:52:58.736988 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:52:58.797982 augenrules[1747]: No rules
Jul 15 23:52:58.799899 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:52:58.800233 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:52:58.801719 sudo[1724]: pam_unix(sudo:session): session closed for user root
Jul 15 23:52:58.803429 sshd[1723]: Connection closed by 10.0.0.1 port 55924
Jul 15 23:52:58.803839 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Jul 15 23:52:58.813897 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:55924.service: Deactivated successfully.
Jul 15 23:52:58.816072 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:52:58.816919 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:52:58.820015 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:55932.service - OpenSSH per-connection server daemon (10.0.0.1:55932).
Jul 15 23:52:58.820721 systemd-logind[1498]: Removed session 6.
Jul 15 23:52:58.881340 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 55932 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:52:58.883246 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:52:58.888381 systemd-logind[1498]: New session 7 of user core.
Jul 15 23:52:58.897894 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:52:58.953632 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:52:58.954052 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:52:59.529817 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:52:59.554176 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:53:00.063761 dockerd[1779]: time="2025-07-15T23:53:00.063676282Z" level=info msg="Starting up"
Jul 15 23:53:00.064569 dockerd[1779]: time="2025-07-15T23:53:00.064529869Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:53:00.601797 dockerd[1779]: time="2025-07-15T23:53:00.601716832Z" level=info msg="Loading containers: start."
Jul 15 23:53:00.613700 kernel: Initializing XFRM netlink socket
Jul 15 23:53:01.065602 systemd-networkd[1484]: docker0: Link UP
Jul 15 23:53:01.274292 dockerd[1779]: time="2025-07-15T23:53:01.274197751Z" level=info msg="Loading containers: done."
Jul 15 23:53:01.319489 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3599092557-merged.mount: Deactivated successfully.
Jul 15 23:53:01.453183 dockerd[1779]: time="2025-07-15T23:53:01.453095566Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:53:01.453354 dockerd[1779]: time="2025-07-15T23:53:01.453243754Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:53:01.453480 dockerd[1779]: time="2025-07-15T23:53:01.453444595Z" level=info msg="Initializing buildkit"
Jul 15 23:53:01.881133 dockerd[1779]: time="2025-07-15T23:53:01.881086186Z" level=info msg="Completed buildkit initialization"
Jul 15 23:53:01.890365 dockerd[1779]: time="2025-07-15T23:53:01.890303889Z" level=info msg="Daemon has completed initialization"
Jul 15 23:53:01.890530 dockerd[1779]: time="2025-07-15T23:53:01.890414060Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:53:01.890611 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:53:02.996081 containerd[1563]: time="2025-07-15T23:53:02.995994654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Jul 15 23:53:03.615261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042312179.mount: Deactivated successfully.
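The daemons above self-report durations (e.g. "Startup finished in 3.044s", "containerd successfully booted in 0.363746s"); the same arithmetic can be done by hand on any pair of RFC 3339 entry timestamps. A sketch, using two containerd timestamps from this excerpt (nanoseconds truncated to microseconds, since Python's datetime resolves only to microseconds; note this measures only the interval between these two visible entries, not a daemon's full startup, which may begin before the excerpt):

```python
from datetime import datetime, timezone

# Timestamps copied from containerd entries in the log above,
# truncated from 9 to 6 fractional digits.
first_entry = "2025-07-15T23:52:54.909471Z"
boot_done = "2025-07-15T23:52:55.055333Z"

def parse(ts: str) -> datetime:
    # %f consumes the 6-digit microsecond field; the trailing Z is literal
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

elapsed = (parse(boot_done) - parse(first_entry)).total_seconds()
print(f"{elapsed:.6f}s")  # interval spanned by these two entries
```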
Jul 15 23:53:05.086310 containerd[1563]: time="2025-07-15T23:53:05.086196683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:05.087141 containerd[1563]: time="2025-07-15T23:53:05.087083274Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Jul 15 23:53:05.089704 containerd[1563]: time="2025-07-15T23:53:05.089642359Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:05.093318 containerd[1563]: time="2025-07-15T23:53:05.093263053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:05.094481 containerd[1563]: time="2025-07-15T23:53:05.094396653Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.098326668s" Jul 15 23:53:05.094481 containerd[1563]: time="2025-07-15T23:53:05.094462314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Jul 15 23:53:05.098721 containerd[1563]: time="2025-07-15T23:53:05.098669161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 15 23:53:06.544186 containerd[1563]: time="2025-07-15T23:53:06.544113234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:06.545016 containerd[1563]: time="2025-07-15T23:53:06.544979105Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Jul 15 23:53:06.546566 containerd[1563]: time="2025-07-15T23:53:06.546511092Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:06.549444 containerd[1563]: time="2025-07-15T23:53:06.549406993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:06.550359 containerd[1563]: time="2025-07-15T23:53:06.550326247Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.451615798s" Jul 15 23:53:06.550359 containerd[1563]: time="2025-07-15T23:53:06.550359026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Jul 15 23:53:06.551102 containerd[1563]: time="2025-07-15T23:53:06.550918685Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 15 23:53:07.865141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:53:07.868608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:53:08.997766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:53:09.019994 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:53:09.168395 kubelet[2058]: E0715 23:53:09.168305 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:53:09.175548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:53:09.175800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:53:09.176261 systemd[1]: kubelet.service: Consumed 410ms CPU time, 111.9M memory peak. Jul 15 23:53:10.203673 containerd[1563]: time="2025-07-15T23:53:10.203560811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:10.205874 containerd[1563]: time="2025-07-15T23:53:10.205780525Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Jul 15 23:53:10.210233 containerd[1563]: time="2025-07-15T23:53:10.210064085Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:10.214808 containerd[1563]: time="2025-07-15T23:53:10.214705540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:10.215780 containerd[1563]: time="2025-07-15T23:53:10.215717030Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id 
\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 3.664757649s"
Jul 15 23:53:10.215780 containerd[1563]: time="2025-07-15T23:53:10.215763279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Jul 15 23:53:10.216446 containerd[1563]: time="2025-07-15T23:53:10.216403403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Jul 15 23:53:11.449810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518000068.mount: Deactivated successfully.
Jul 15 23:53:13.016953 containerd[1563]: time="2025-07-15T23:53:13.016865649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:13.036438 containerd[1563]: time="2025-07-15T23:53:13.036369194Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Jul 15 23:53:13.077759 containerd[1563]: time="2025-07-15T23:53:13.077708910Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:13.119685 containerd[1563]: time="2025-07-15T23:53:13.119596651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:13.120337 containerd[1563]: time="2025-07-15T23:53:13.120278571Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.90383703s"
Jul 15 23:53:13.120337 containerd[1563]: time="2025-07-15T23:53:13.120330165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Jul 15 23:53:13.121279 containerd[1563]: time="2025-07-15T23:53:13.121222865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:53:13.735981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897711412.mount: Deactivated successfully.
Jul 15 23:53:17.284894 containerd[1563]: time="2025-07-15T23:53:17.284810159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:17.286370 containerd[1563]: time="2025-07-15T23:53:17.286303579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 15 23:53:17.288577 containerd[1563]: time="2025-07-15T23:53:17.288515397Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:17.292478 containerd[1563]: time="2025-07-15T23:53:17.292409620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:17.293748 containerd[1563]: time="2025-07-15T23:53:17.293692011Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.172416409s"
Jul 15 23:53:17.293748 containerd[1563]: time="2025-07-15T23:53:17.293735018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 15 23:53:17.294383 containerd[1563]: time="2025-07-15T23:53:17.294336719Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:53:17.872903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291639919.mount: Deactivated successfully.
Jul 15 23:53:17.880777 containerd[1563]: time="2025-07-15T23:53:17.880702251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:53:17.881459 containerd[1563]: time="2025-07-15T23:53:17.881394652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 15 23:53:17.882858 containerd[1563]: time="2025-07-15T23:53:17.882801487Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:53:17.885187 containerd[1563]: time="2025-07-15T23:53:17.885125527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:53:17.886014 containerd[1563]: time="2025-07-15T23:53:17.885966099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.596043ms"
Jul 15 23:53:17.886014 containerd[1563]: time="2025-07-15T23:53:17.886002759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 15 23:53:17.886555 containerd[1563]: time="2025-07-15T23:53:17.886517527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 15 23:53:19.305092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 23:53:19.307825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:53:19.317500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771549437.mount: Deactivated successfully.
Jul 15 23:53:19.541387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:53:19.561120 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:53:19.604378 kubelet[2146]: E0715 23:53:19.604313 2146 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:53:19.608939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:53:19.609134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:53:19.609518 systemd[1]: kubelet.service: Consumed 232ms CPU time, 108.3M memory peak.
Jul 15 23:53:24.393969 containerd[1563]: time="2025-07-15T23:53:24.393889037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:24.395015 containerd[1563]: time="2025-07-15T23:53:24.394963521Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 15 23:53:24.398968 containerd[1563]: time="2025-07-15T23:53:24.398910806Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:24.401970 containerd[1563]: time="2025-07-15T23:53:24.401923709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:24.403185 containerd[1563]: time="2025-07-15T23:53:24.403145946Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.516446358s"
Jul 15 23:53:24.403248 containerd[1563]: time="2025-07-15T23:53:24.403185660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 15 23:53:26.958169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:53:26.958370 systemd[1]: kubelet.service: Consumed 232ms CPU time, 108.3M memory peak.
Jul 15 23:53:26.960794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:53:26.986545 systemd[1]: Reload requested from client PID 2230 ('systemctl') (unit session-7.scope)...
Jul 15 23:53:26.986561 systemd[1]: Reloading...
Jul 15 23:53:27.073879 zram_generator::config[2272]: No configuration found.
Jul 15 23:53:27.404583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:53:27.534172 systemd[1]: Reloading finished in 547 ms.
Jul 15 23:53:27.599376 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:53:27.599474 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:53:27.599792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:53:27.599833 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.3M memory peak.
Jul 15 23:53:27.601418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:53:27.783171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:53:27.796191 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:53:27.846365 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:53:27.846365 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:53:27.846365 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:53:27.846940 kubelet[2320]: I0715 23:53:27.846449 2320 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:53:28.036908 kubelet[2320]: I0715 23:53:28.036219 2320 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 15 23:53:28.036908 kubelet[2320]: I0715 23:53:28.036264 2320 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:53:28.036908 kubelet[2320]: I0715 23:53:28.036780 2320 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 15 23:53:28.088517 kubelet[2320]: E0715 23:53:28.088443 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:28.092037 kubelet[2320]: I0715 23:53:28.091978 2320 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:53:28.138862 kubelet[2320]: I0715 23:53:28.138812 2320 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:53:28.154017 kubelet[2320]: I0715 23:53:28.153964 2320 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:53:28.154191 kubelet[2320]: I0715 23:53:28.154171 2320 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 15 23:53:28.154412 kubelet[2320]: I0715 23:53:28.154358 2320 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:53:28.154599 kubelet[2320]: I0715 23:53:28.154396 2320 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:53:28.154764 kubelet[2320]: I0715 23:53:28.154622 2320 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:53:28.154764 kubelet[2320]: I0715 23:53:28.154631 2320 container_manager_linux.go:300] "Creating device plugin manager"
Jul 15 23:53:28.154853 kubelet[2320]: I0715 23:53:28.154833 2320 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:53:28.197801 kubelet[2320]: W0715 23:53:28.197708 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:28.197801 kubelet[2320]: E0715 23:53:28.197802 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:28.209738 kubelet[2320]: I0715 23:53:28.209680 2320 kubelet.go:408] "Attempting to sync node with API server"
Jul 15 23:53:28.209738 kubelet[2320]: I0715 23:53:28.209741 2320 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:53:28.209938 kubelet[2320]: I0715 23:53:28.209811 2320 kubelet.go:314] "Adding apiserver pod source"
Jul 15 23:53:28.209938 kubelet[2320]: I0715 23:53:28.209852 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:53:28.211609 kubelet[2320]: W0715 23:53:28.211549 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:28.211682 kubelet[2320]: E0715 23:53:28.211620 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:28.216447 kubelet[2320]: I0715 23:53:28.216121 2320 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:53:28.216686 kubelet[2320]: I0715 23:53:28.216641 2320 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:53:28.216795 kubelet[2320]: W0715 23:53:28.216768 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:53:28.236341 kubelet[2320]: I0715 23:53:28.235694 2320 server.go:1274] "Started kubelet"
Jul 15 23:53:28.236934 kubelet[2320]: I0715 23:53:28.236773 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:53:28.238034 kubelet[2320]: I0715 23:53:28.237298 2320 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:53:28.238034 kubelet[2320]: I0715 23:53:28.237374 2320 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:53:28.238940 kubelet[2320]: I0715 23:53:28.238898 2320 server.go:449] "Adding debug handlers to kubelet server"
Jul 15 23:53:28.240057 kubelet[2320]: I0715 23:53:28.240035 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:53:28.241733 kubelet[2320]: I0715 23:53:28.241700 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:53:28.245016 kubelet[2320]: I0715 23:53:28.244990 2320 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 15 23:53:28.245189 kubelet[2320]: I0715 23:53:28.245170 2320 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 15 23:53:28.245289 kubelet[2320]: I0715 23:53:28.245271 2320 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:53:28.245818 kubelet[2320]: W0715 23:53:28.245770 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:28.245871 kubelet[2320]: E0715 23:53:28.245820 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:28.246607 kubelet[2320]: I0715 23:53:28.246577 2320 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:53:28.246833 kubelet[2320]: E0715 23:53:28.246801 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.246977 kubelet[2320]: I0715 23:53:28.246896 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:53:28.247690 kubelet[2320]: E0715 23:53:28.247663 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms"
Jul 15 23:53:28.248058 kubelet[2320]: E0715 23:53:28.248030 2320 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:53:28.249405 kubelet[2320]: I0715 23:53:28.249383 2320 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:53:28.276674 kubelet[2320]: E0715 23:53:28.273860 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185291e2c9d8c015 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:53:28.235606037 +0000 UTC m=+0.435032831,LastTimestamp:2025-07-15 23:53:28.235606037 +0000 UTC m=+0.435032831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 23:53:28.282680 kubelet[2320]: I0715 23:53:28.282638 2320 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 15 23:53:28.282680 kubelet[2320]: I0715 23:53:28.282676 2320 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 15 23:53:28.282832 kubelet[2320]: I0715 23:53:28.282695 2320 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:53:28.297163 kubelet[2320]: I0715 23:53:28.287168 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:53:28.297163 kubelet[2320]: I0715 23:53:28.288769 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:53:28.297163 kubelet[2320]: I0715 23:53:28.288847 2320 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 15 23:53:28.297163 kubelet[2320]: I0715 23:53:28.289086 2320 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 15 23:53:28.297163 kubelet[2320]: E0715 23:53:28.289143 2320 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:53:28.297163 kubelet[2320]: W0715 23:53:28.289745 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:28.297163 kubelet[2320]: E0715 23:53:28.289783 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:28.347489 kubelet[2320]: E0715 23:53:28.347423 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.390336 kubelet[2320]: E0715 23:53:28.390252 2320 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:53:28.447980 kubelet[2320]: E0715 23:53:28.447907 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.448576 kubelet[2320]: E0715 23:53:28.448505 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms"
Jul 15 23:53:28.549192 kubelet[2320]: E0715 23:53:28.549008 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.590678 kubelet[2320]: E0715 23:53:28.590586 2320 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:53:28.649411 kubelet[2320]: E0715 23:53:28.649335 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.749729 kubelet[2320]: E0715 23:53:28.749669 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.849921 kubelet[2320]: E0715 23:53:28.849737 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.849921 kubelet[2320]: E0715 23:53:28.849774 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms"
Jul 15 23:53:28.950507 kubelet[2320]: E0715 23:53:28.950400 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:28.991765 kubelet[2320]: E0715 23:53:28.991615 2320 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:53:29.051322 kubelet[2320]: E0715 23:53:29.051204 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.074112 kubelet[2320]: W0715 23:53:29.074010 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:29.074112 kubelet[2320]: E0715 23:53:29.074107 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:29.152366 kubelet[2320]: E0715 23:53:29.152176 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.253203 kubelet[2320]: E0715 23:53:29.253125 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.354167 kubelet[2320]: E0715 23:53:29.354087 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.455018 kubelet[2320]: E0715 23:53:29.454823 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.555676 kubelet[2320]: E0715 23:53:29.555577 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:29.635797 kubelet[2320]: W0715 23:53:29.635701 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:29.635797 kubelet[2320]: E0715 23:53:29.635790 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:29.651061 kubelet[2320]: E0715 23:53:29.650969 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="1.6s"
Jul 15 23:53:29.656156 kubelet[2320]: E0715 23:53:29.656096 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.756224 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.792478 2320 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:53:30.357156 kubelet[2320]: W0715 23:53:29.805272 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.805375 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:30.357156 kubelet[2320]: W0715 23:53:29.839258 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.839322 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.857036 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:29.957702 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.357156 kubelet[2320]: E0715 23:53:30.058305 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.358853 kubelet[2320]: E0715 23:53:30.136768 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:53:30.358853 kubelet[2320]: E0715 23:53:30.158415 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.358853 kubelet[2320]: E0715 23:53:30.258879 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.359927 kubelet[2320]: E0715 23:53:30.359861 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.460617 kubelet[2320]: E0715 23:53:30.460489 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.561301 kubelet[2320]: E0715 23:53:30.561226 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.662259 kubelet[2320]: E0715 23:53:30.662065 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.763173 kubelet[2320]: E0715 23:53:30.763058 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.778257 kubelet[2320]: I0715 23:53:30.778184 2320 policy_none.go:49] "None policy: Start"
Jul 15 23:53:30.779167 kubelet[2320]: I0715 23:53:30.779133 2320 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 15 23:53:30.779218 kubelet[2320]: I0715 23:53:30.779172 2320 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:53:30.863916 kubelet[2320]: E0715 23:53:30.863804 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:30.964692 kubelet[2320]: E0715 23:53:30.964504 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:53:31.002103 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:53:31.016775 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:53:31.020802 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:53:31.031977 kubelet[2320]: I0715 23:53:31.031907 2320 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:53:31.032264 kubelet[2320]: I0715 23:53:31.032228 2320 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:53:31.032388 kubelet[2320]: I0715 23:53:31.032252 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:53:31.032554 kubelet[2320]: I0715 23:53:31.032527 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:53:31.033844 kubelet[2320]: E0715 23:53:31.033815 2320 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 15 23:53:31.133871 kubelet[2320]: I0715 23:53:31.133684 2320 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 23:53:31.134342 kubelet[2320]: E0715 23:53:31.134274 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost"
Jul 15 23:53:31.239856 kubelet[2320]: E0715 23:53:31.239574 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185291e2c9d8c015 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:53:28.235606037 +0000 UTC m=+0.435032831,LastTimestamp:2025-07-15 23:53:28.235606037 +0000 UTC m=+0.435032831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 23:53:31.252494 kubelet[2320]: E0715 23:53:31.252412 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="3.2s"
Jul 15 23:53:31.335880 kubelet[2320]: I0715 23:53:31.335826 2320 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 15 23:53:31.336378 kubelet[2320]: E0715 23:53:31.336337 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost"
Jul 15 23:53:31.404335 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice.
Jul 15 23:53:31.432315 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice.
Jul 15 23:53:31.455153 systemd[1]: Created slice kubepods-burstable-pod0ce724f969e915d58fa8479c730e171e.slice - libcontainer container kubepods-burstable-pod0ce724f969e915d58fa8479c730e171e.slice.
Jul 15 23:53:31.466559 kubelet[2320]: I0715 23:53:31.466516 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:53:31.466975 kubelet[2320]: I0715 23:53:31.466562 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:53:31.466975 kubelet[2320]: I0715 23:53:31.466592 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:53:31.466975 kubelet[2320]: I0715 23:53:31.466612 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:53:31.466975 kubelet[2320]: I0715 23:53:31.466631 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 23:53:31.466975
kubelet[2320]: I0715 23:53:31.466676 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:31.467175 kubelet[2320]: I0715 23:53:31.466700 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:31.467175 kubelet[2320]: I0715 23:53:31.466722 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:31.467175 kubelet[2320]: I0715 23:53:31.466741 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:53:31.729684 containerd[1563]: time="2025-07-15T23:53:31.729523267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:31.738050 kubelet[2320]: I0715 23:53:31.738000 2320 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:53:31.738529 
kubelet[2320]: E0715 23:53:31.738466 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jul 15 23:53:31.753444 containerd[1563]: time="2025-07-15T23:53:31.753381689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:31.758893 containerd[1563]: time="2025-07-15T23:53:31.758858804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ce724f969e915d58fa8479c730e171e,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:31.771998 kubelet[2320]: W0715 23:53:31.771944 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jul 15 23:53:31.771998 kubelet[2320]: E0715 23:53:31.771995 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:53:31.885417 kubelet[2320]: W0715 23:53:31.885363 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jul 15 23:53:31.885417 kubelet[2320]: E0715 23:53:31.885420 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:53:32.054550 kubelet[2320]: W0715 23:53:32.054397 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jul 15 23:53:32.054550 kubelet[2320]: E0715 23:53:32.054471 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:53:32.375730 containerd[1563]: time="2025-07-15T23:53:32.375504153Z" level=info msg="connecting to shim 84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3" address="unix:///run/containerd/s/90ef142afb821d1552f0f8d6aed661953ef409003ce36935f9c002e8b6d04eb0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:32.378427 containerd[1563]: time="2025-07-15T23:53:32.378359840Z" level=info msg="connecting to shim cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9" address="unix:///run/containerd/s/fe940781092621390d94aad64cb8e074667885af10ca0e772d4f14f1f3f07655" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:32.392478 kubelet[2320]: W0715 23:53:32.392425 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Jul 15 23:53:32.392678 kubelet[2320]: E0715 23:53:32.392625 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Jul 15 23:53:32.410955 systemd[1]: Started cri-containerd-84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3.scope - libcontainer container 84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3. Jul 15 23:53:32.421155 systemd[1]: Started cri-containerd-cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9.scope - libcontainer container cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9. Jul 15 23:53:32.422978 containerd[1563]: time="2025-07-15T23:53:32.422927117Z" level=info msg="connecting to shim 5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca" address="unix:///run/containerd/s/fe22d45dad857f19a6317f790736ea9b694088b0c389491695fa42c871b663da" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:32.453989 systemd[1]: Started cri-containerd-5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca.scope - libcontainer container 5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca. 
Jul 15 23:53:32.476617 containerd[1563]: time="2025-07-15T23:53:32.476520197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3\"" Jul 15 23:53:32.482484 containerd[1563]: time="2025-07-15T23:53:32.482427510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9\"" Jul 15 23:53:32.483183 containerd[1563]: time="2025-07-15T23:53:32.483148735Z" level=info msg="CreateContainer within sandbox \"84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:53:32.485841 containerd[1563]: time="2025-07-15T23:53:32.485806209Z" level=info msg="CreateContainer within sandbox \"cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:53:32.499623 containerd[1563]: time="2025-07-15T23:53:32.499353020Z" level=info msg="Container 268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:32.504689 containerd[1563]: time="2025-07-15T23:53:32.504612302Z" level=info msg="Container 4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:32.508063 containerd[1563]: time="2025-07-15T23:53:32.508006587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0ce724f969e915d58fa8479c730e171e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca\"" Jul 15 23:53:32.510775 containerd[1563]: 
time="2025-07-15T23:53:32.510717355Z" level=info msg="CreateContainer within sandbox \"5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:53:32.511581 containerd[1563]: time="2025-07-15T23:53:32.511515072Z" level=info msg="CreateContainer within sandbox \"84afb50f4f14d454c66c9865c9e306811a291956ae9206f6a57bb7e07f468bc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55\"" Jul 15 23:53:32.512087 containerd[1563]: time="2025-07-15T23:53:32.512044115Z" level=info msg="StartContainer for \"268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55\"" Jul 15 23:53:32.513290 containerd[1563]: time="2025-07-15T23:53:32.513246157Z" level=info msg="connecting to shim 268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55" address="unix:///run/containerd/s/90ef142afb821d1552f0f8d6aed661953ef409003ce36935f9c002e8b6d04eb0" protocol=ttrpc version=3 Jul 15 23:53:32.517460 containerd[1563]: time="2025-07-15T23:53:32.517396362Z" level=info msg="CreateContainer within sandbox \"cc8a8cb9cb157ed9611c5dc06c9de0004a12c4447ac0e8291e901bad168af0f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab\"" Jul 15 23:53:32.518309 containerd[1563]: time="2025-07-15T23:53:32.518268456Z" level=info msg="StartContainer for \"4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab\"" Jul 15 23:53:32.519553 containerd[1563]: time="2025-07-15T23:53:32.519501647Z" level=info msg="connecting to shim 4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab" address="unix:///run/containerd/s/fe940781092621390d94aad64cb8e074667885af10ca0e772d4f14f1f3f07655" protocol=ttrpc version=3 Jul 15 23:53:32.526511 containerd[1563]: time="2025-07-15T23:53:32.526463300Z" level=info msg="Container 
15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:32.538893 systemd[1]: Started cri-containerd-268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55.scope - libcontainer container 268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55. Jul 15 23:53:32.539945 containerd[1563]: time="2025-07-15T23:53:32.539897614Z" level=info msg="CreateContainer within sandbox \"5199ab7513f6324e0af05ff67dd11912eee4029e8895b9ff6b4ef1333dc169ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75\"" Jul 15 23:53:32.540662 containerd[1563]: time="2025-07-15T23:53:32.540607952Z" level=info msg="StartContainer for \"15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75\"" Jul 15 23:53:32.540867 kubelet[2320]: I0715 23:53:32.540823 2320 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:53:32.541339 kubelet[2320]: E0715 23:53:32.541309 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Jul 15 23:53:32.543929 systemd[1]: Started cri-containerd-4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab.scope - libcontainer container 4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab. Jul 15 23:53:32.544536 containerd[1563]: time="2025-07-15T23:53:32.544000835Z" level=info msg="connecting to shim 15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75" address="unix:///run/containerd/s/fe22d45dad857f19a6317f790736ea9b694088b0c389491695fa42c871b663da" protocol=ttrpc version=3 Jul 15 23:53:32.570849 systemd[1]: Started cri-containerd-15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75.scope - libcontainer container 15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75. 
Jul 15 23:53:32.612489 containerd[1563]: time="2025-07-15T23:53:32.612388761Z" level=info msg="StartContainer for \"4279e639b9969896d087c56dbbb4b8c08b16c6560498ef5810241f530380f2ab\" returns successfully" Jul 15 23:53:32.613956 containerd[1563]: time="2025-07-15T23:53:32.613900992Z" level=info msg="StartContainer for \"268485203868f209eb2f0a8e4bfdc247309d8798c898a9b7edf90ea07d4bab55\" returns successfully" Jul 15 23:53:32.635688 containerd[1563]: time="2025-07-15T23:53:32.635304474Z" level=info msg="StartContainer for \"15251bb7acfd22608f7dcfce537e3b4c00d5314390e4044a6c7f65a028700f75\" returns successfully" Jul 15 23:53:34.143060 kubelet[2320]: I0715 23:53:34.143022 2320 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:53:34.158917 kubelet[2320]: I0715 23:53:34.158848 2320 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 23:53:34.158917 kubelet[2320]: E0715 23:53:34.158900 2320 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 23:53:34.169354 kubelet[2320]: E0715 23:53:34.169302 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.269674 kubelet[2320]: E0715 23:53:34.269599 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.370127 kubelet[2320]: E0715 23:53:34.370071 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.470894 kubelet[2320]: E0715 23:53:34.470732 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.571603 kubelet[2320]: E0715 23:53:34.571523 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.672427 kubelet[2320]: 
E0715 23:53:34.672363 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.772909 kubelet[2320]: E0715 23:53:34.772842 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.873541 kubelet[2320]: E0715 23:53:34.873490 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:34.974339 kubelet[2320]: E0715 23:53:34.974281 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.075556 kubelet[2320]: E0715 23:53:35.075406 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.176273 kubelet[2320]: E0715 23:53:35.176219 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.276799 kubelet[2320]: E0715 23:53:35.276739 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.378034 kubelet[2320]: E0715 23:53:35.377836 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.478572 kubelet[2320]: E0715 23:53:35.478520 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.579194 kubelet[2320]: E0715 23:53:35.579130 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.680041 kubelet[2320]: E0715 23:53:35.679886 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.780429 kubelet[2320]: E0715 23:53:35.780337 2320 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jul 15 23:53:35.881052 kubelet[2320]: E0715 23:53:35.880996 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:35.981779 kubelet[2320]: E0715 23:53:35.981724 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.082644 kubelet[2320]: E0715 23:53:36.082595 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.183590 kubelet[2320]: E0715 23:53:36.183527 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.284137 kubelet[2320]: E0715 23:53:36.284025 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.306203 systemd[1]: Reload requested from client PID 2594 ('systemctl') (unit session-7.scope)... Jul 15 23:53:36.306226 systemd[1]: Reloading... Jul 15 23:53:36.384608 kubelet[2320]: E0715 23:53:36.384192 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.406709 zram_generator::config[2638]: No configuration found. Jul 15 23:53:36.485295 kubelet[2320]: E0715 23:53:36.485241 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.529322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:53:36.586173 kubelet[2320]: E0715 23:53:36.586021 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.668700 systemd[1]: Reloading finished in 361 ms. 
Jul 15 23:53:36.686889 kubelet[2320]: E0715 23:53:36.686847 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:36.701189 kubelet[2320]: I0715 23:53:36.700775 2320 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:53:36.700972 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:53:36.723129 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:53:36.723523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:53:36.723587 systemd[1]: kubelet.service: Consumed 813ms CPU time, 131.1M memory peak. Jul 15 23:53:36.725742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:53:37.041845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:53:37.053116 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:53:37.090539 kubelet[2682]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:53:37.090539 kubelet[2682]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 23:53:37.090539 kubelet[2682]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:53:37.091316 kubelet[2682]: I0715 23:53:37.090614 2682 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:53:37.098078 kubelet[2682]: I0715 23:53:37.098035 2682 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 23:53:37.098078 kubelet[2682]: I0715 23:53:37.098062 2682 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:53:37.098325 kubelet[2682]: I0715 23:53:37.098302 2682 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 23:53:37.099664 kubelet[2682]: I0715 23:53:37.099625 2682 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:53:37.102060 kubelet[2682]: I0715 23:53:37.101992 2682 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:53:37.106132 kubelet[2682]: I0715 23:53:37.106099 2682 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:53:37.112026 kubelet[2682]: I0715 23:53:37.111982 2682 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:53:37.112166 kubelet[2682]: I0715 23:53:37.112153 2682 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 23:53:37.112370 kubelet[2682]: I0715 23:53:37.112296 2682 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:53:37.112610 kubelet[2682]: I0715 23:53:37.112349 2682 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 15 23:53:37.112726 kubelet[2682]: I0715 23:53:37.112622 2682 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:53:37.112726 kubelet[2682]: I0715 23:53:37.112636 2682 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 23:53:37.112726 kubelet[2682]: I0715 23:53:37.112718 2682 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:53:37.112978 kubelet[2682]: I0715 23:53:37.112940 2682 kubelet.go:408] "Attempting to sync node with API server" Jul 15 23:53:37.112978 kubelet[2682]: I0715 23:53:37.112965 2682 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:53:37.113244 kubelet[2682]: I0715 23:53:37.113016 2682 kubelet.go:314] "Adding apiserver pod source" Jul 15 23:53:37.113244 kubelet[2682]: I0715 23:53:37.113033 2682 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:53:37.114184 kubelet[2682]: I0715 23:53:37.114137 2682 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:53:37.114841 kubelet[2682]: I0715 23:53:37.114781 2682 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:53:37.115301 kubelet[2682]: I0715 23:53:37.115280 2682 server.go:1274] "Started kubelet" Jul 15 23:53:37.116281 kubelet[2682]: I0715 23:53:37.116252 2682 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:53:37.117299 kubelet[2682]: I0715 23:53:37.117125 2682 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:53:37.117299 kubelet[2682]: I0715 23:53:37.117134 2682 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:53:37.117299 kubelet[2682]: I0715 23:53:37.117214 2682 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 
23:53:37.118532 kubelet[2682]: I0715 23:53:37.118505 2682 server.go:449] "Adding debug handlers to kubelet server" Jul 15 23:53:37.122476 kubelet[2682]: I0715 23:53:37.122433 2682 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:53:37.124726 kubelet[2682]: E0715 23:53:37.124690 2682 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:53:37.124781 kubelet[2682]: I0715 23:53:37.124751 2682 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 23:53:37.126172 kubelet[2682]: I0715 23:53:37.124918 2682 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 23:53:37.126172 kubelet[2682]: I0715 23:53:37.125065 2682 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:53:37.128276 kubelet[2682]: I0715 23:53:37.127460 2682 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:53:37.128276 kubelet[2682]: I0715 23:53:37.127579 2682 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:53:37.129480 kubelet[2682]: E0715 23:53:37.129457 2682 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:53:37.130045 kubelet[2682]: I0715 23:53:37.130007 2682 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:53:37.138285 kubelet[2682]: I0715 23:53:37.138204 2682 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:53:37.139756 kubelet[2682]: I0715 23:53:37.139724 2682 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:53:37.140918 kubelet[2682]: I0715 23:53:37.140885 2682 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 23:53:37.140970 kubelet[2682]: I0715 23:53:37.140931 2682 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 23:53:37.141034 kubelet[2682]: E0715 23:53:37.140988 2682 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:53:37.167477 kubelet[2682]: I0715 23:53:37.167447 2682 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 23:53:37.167477 kubelet[2682]: I0715 23:53:37.167470 2682 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 23:53:37.167477 kubelet[2682]: I0715 23:53:37.167489 2682 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:53:37.167702 kubelet[2682]: I0715 23:53:37.167672 2682 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:53:37.167702 kubelet[2682]: I0715 23:53:37.167683 2682 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:53:37.167702 kubelet[2682]: I0715 23:53:37.167702 2682 policy_none.go:49] "None policy: Start" Jul 15 23:53:37.168343 kubelet[2682]: I0715 23:53:37.168325 2682 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 23:53:37.168381 kubelet[2682]: I0715 23:53:37.168357 2682 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:53:37.168520 kubelet[2682]: I0715 23:53:37.168496 2682 state_mem.go:75] "Updated machine memory state" Jul 15 23:53:37.173577 kubelet[2682]: I0715 23:53:37.173543 2682 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:53:37.173844 kubelet[2682]: I0715 23:53:37.173819 2682 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:53:37.173878 kubelet[2682]: I0715 23:53:37.173846 2682 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:53:37.174085 kubelet[2682]: I0715 23:53:37.174066 2682 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:53:37.275628 kubelet[2682]: I0715 23:53:37.275573 2682 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 23:53:37.290469 kubelet[2682]: I0715 23:53:37.290428 2682 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 23:53:37.290668 kubelet[2682]: I0715 23:53:37.290557 2682 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 23:53:37.312742 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:53:37.313106 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:53:37.426390 kubelet[2682]: I0715 23:53:37.426347 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:53:37.426390 kubelet[2682]: I0715 23:53:37.426391 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:37.426561 kubelet[2682]: I0715 23:53:37.426409 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:37.426561 kubelet[2682]: I0715 23:53:37.426425 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:37.426561 kubelet[2682]: I0715 23:53:37.426452 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:53:37.426561 kubelet[2682]: I0715 23:53:37.426468 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:53:37.426561 kubelet[2682]: I0715 23:53:37.426481 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ce724f969e915d58fa8479c730e171e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0ce724f969e915d58fa8479c730e171e\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:53:37.426720 kubelet[2682]: I0715 23:53:37.426495 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:37.426720 kubelet[2682]: I0715 23:53:37.426508 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:53:37.798271 sudo[2715]: pam_unix(sudo:session): session closed for user root Jul 15 23:53:38.114224 kubelet[2682]: I0715 23:53:38.114064 2682 apiserver.go:52] "Watching apiserver" Jul 15 23:53:38.125077 kubelet[2682]: I0715 23:53:38.125031 2682 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 23:53:38.558747 kubelet[2682]: E0715 23:53:38.558703 2682 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:53:39.536126 update_engine[1505]: I20250715 23:53:39.535979 1505 update_attempter.cc:509] Updating boot flags... 
Jul 15 23:53:39.945874 kubelet[2682]: I0715 23:53:39.945470 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.94539582 podStartE2EDuration="2.94539582s" podCreationTimestamp="2025-07-15 23:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:53:39.748042875 +0000 UTC m=+2.690579498" watchObservedRunningTime="2025-07-15 23:53:39.94539582 +0000 UTC m=+2.887932443" Jul 15 23:53:40.341810 kubelet[2682]: I0715 23:53:40.341602 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.341570741 podStartE2EDuration="3.341570741s" podCreationTimestamp="2025-07-15 23:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:53:39.946872526 +0000 UTC m=+2.889409159" watchObservedRunningTime="2025-07-15 23:53:40.341570741 +0000 UTC m=+3.284107364" Jul 15 23:53:40.886472 kubelet[2682]: I0715 23:53:40.886368 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.886336049 podStartE2EDuration="3.886336049s" podCreationTimestamp="2025-07-15 23:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:53:40.342852634 +0000 UTC m=+3.285389277" watchObservedRunningTime="2025-07-15 23:53:40.886336049 +0000 UTC m=+3.828872662" Jul 15 23:53:42.023641 sudo[1759]: pam_unix(sudo:session): session closed for user root Jul 15 23:53:42.025981 sshd[1758]: Connection closed by 10.0.0.1 port 55932 Jul 15 23:53:42.043453 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jul 15 23:53:42.089227 systemd[1]: 
sshd@6-10.0.0.86:22-10.0.0.1:55932.service: Deactivated successfully. Jul 15 23:53:42.091497 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:53:42.091781 systemd[1]: session-7.scope: Consumed 5.154s CPU time, 262.1M memory peak. Jul 15 23:53:42.093696 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:53:42.095283 systemd-logind[1498]: Removed session 7. Jul 15 23:53:45.318284 kubelet[2682]: I0715 23:53:45.318242 2682 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:53:45.318857 containerd[1563]: time="2025-07-15T23:53:45.318621440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:53:45.319138 kubelet[2682]: I0715 23:53:45.318890 2682 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:53:45.471582 systemd[1]: Created slice kubepods-besteffort-pod6a4b8f64_a159_40a0_950a_96ab3461b568.slice - libcontainer container kubepods-besteffort-pod6a4b8f64_a159_40a0_950a_96ab3461b568.slice. 
Jul 15 23:53:45.472054 kubelet[2682]: I0715 23:53:45.471821 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngvd\" (UniqueName: \"kubernetes.io/projected/6a4b8f64-a159-40a0-950a-96ab3461b568-kube-api-access-hngvd\") pod \"kube-proxy-2jpdn\" (UID: \"6a4b8f64-a159-40a0-950a-96ab3461b568\") " pod="kube-system/kube-proxy-2jpdn" Jul 15 23:53:45.472054 kubelet[2682]: I0715 23:53:45.471882 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a4b8f64-a159-40a0-950a-96ab3461b568-kube-proxy\") pod \"kube-proxy-2jpdn\" (UID: \"6a4b8f64-a159-40a0-950a-96ab3461b568\") " pod="kube-system/kube-proxy-2jpdn" Jul 15 23:53:45.472054 kubelet[2682]: I0715 23:53:45.471906 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4b8f64-a159-40a0-950a-96ab3461b568-xtables-lock\") pod \"kube-proxy-2jpdn\" (UID: \"6a4b8f64-a159-40a0-950a-96ab3461b568\") " pod="kube-system/kube-proxy-2jpdn" Jul 15 23:53:45.472054 kubelet[2682]: I0715 23:53:45.471925 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4b8f64-a159-40a0-950a-96ab3461b568-lib-modules\") pod \"kube-proxy-2jpdn\" (UID: \"6a4b8f64-a159-40a0-950a-96ab3461b568\") " pod="kube-system/kube-proxy-2jpdn" Jul 15 23:53:45.486210 systemd[1]: Created slice kubepods-burstable-pod6010ff85_230a_4b4f_a347_cfa7fcb042f6.slice - libcontainer container kubepods-burstable-pod6010ff85_230a_4b4f_a347_cfa7fcb042f6.slice. 
Jul 15 23:53:45.572763 kubelet[2682]: I0715 23:53:45.572584 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-bpf-maps\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.572763 kubelet[2682]: I0715 23:53:45.572640 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-etc-cni-netd\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.572763 kubelet[2682]: I0715 23:53:45.572700 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6010ff85-230a-4b4f-a347-cfa7fcb042f6-clustermesh-secrets\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.572763 kubelet[2682]: I0715 23:53:45.572728 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-run\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.572763 kubelet[2682]: I0715 23:53:45.572748 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c98x\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-kube-api-access-6c98x\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572787 2682 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-config-path\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572829 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-xtables-lock\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572862 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-lib-modules\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572883 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-cgroup\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572902 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cni-path\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573083 kubelet[2682]: I0715 23:53:45.572920 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-net\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573275 kubelet[2682]: I0715 23:53:45.572941 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hubble-tls\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573275 kubelet[2682]: I0715 23:53:45.572962 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hostproc\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.573275 kubelet[2682]: I0715 23:53:45.572985 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-kernel\") pod \"cilium-96tsw\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") " pod="kube-system/cilium-96tsw" Jul 15 23:53:45.783590 containerd[1563]: time="2025-07-15T23:53:45.783540553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jpdn,Uid:6a4b8f64-a159-40a0-950a-96ab3461b568,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:45.790374 containerd[1563]: time="2025-07-15T23:53:45.790331323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96tsw,Uid:6010ff85-230a-4b4f-a347-cfa7fcb042f6,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:46.087308 systemd[1]: Created slice kubepods-besteffort-pod837b0db6_d2fc_4a6e_b85e_fd7c9dcde65b.slice - libcontainer container kubepods-besteffort-pod837b0db6_d2fc_4a6e_b85e_fd7c9dcde65b.slice. 
Jul 15 23:53:46.091993 containerd[1563]: time="2025-07-15T23:53:46.091935609Z" level=info msg="connecting to shim a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:46.098956 containerd[1563]: time="2025-07-15T23:53:46.098724321Z" level=info msg="connecting to shim d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af" address="unix:///run/containerd/s/2a3b48f05c3b7a25b8b59455341e3da99d7e99ea8f276bb32bce400ea21666e8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:46.140070 systemd[1]: Started cri-containerd-a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf.scope - libcontainer container a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf. Jul 15 23:53:46.144706 systemd[1]: Started cri-containerd-d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af.scope - libcontainer container d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af. 
Jul 15 23:53:46.175553 containerd[1563]: time="2025-07-15T23:53:46.175429384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-96tsw,Uid:6010ff85-230a-4b4f-a347-cfa7fcb042f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\"" Jul 15 23:53:46.177967 kubelet[2682]: I0715 23:53:46.177667 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-cilium-config-path\") pod \"cilium-operator-5d85765b45-ntmgh\" (UID: \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\") " pod="kube-system/cilium-operator-5d85765b45-ntmgh" Jul 15 23:53:46.177967 kubelet[2682]: I0715 23:53:46.177737 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwbj\" (UniqueName: \"kubernetes.io/projected/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-kube-api-access-rnwbj\") pod \"cilium-operator-5d85765b45-ntmgh\" (UID: \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\") " pod="kube-system/cilium-operator-5d85765b45-ntmgh" Jul 15 23:53:46.180111 containerd[1563]: time="2025-07-15T23:53:46.180052830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:53:46.184563 containerd[1563]: time="2025-07-15T23:53:46.184472909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jpdn,Uid:6a4b8f64-a159-40a0-950a-96ab3461b568,Namespace:kube-system,Attempt:0,} returns sandbox id \"d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af\"" Jul 15 23:53:46.189851 containerd[1563]: time="2025-07-15T23:53:46.189777468Z" level=info msg="CreateContainer within sandbox \"d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:53:46.207412 containerd[1563]: 
time="2025-07-15T23:53:46.207341350Z" level=info msg="Container f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:46.216148 containerd[1563]: time="2025-07-15T23:53:46.216073880Z" level=info msg="CreateContainer within sandbox \"d02915aeab77e9e3a16d93ad856d61ec091619090b03916fa2990270f68125af\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256\"" Jul 15 23:53:46.216920 containerd[1563]: time="2025-07-15T23:53:46.216806373Z" level=info msg="StartContainer for \"f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256\"" Jul 15 23:53:46.218239 containerd[1563]: time="2025-07-15T23:53:46.218213610Z" level=info msg="connecting to shim f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256" address="unix:///run/containerd/s/2a3b48f05c3b7a25b8b59455341e3da99d7e99ea8f276bb32bce400ea21666e8" protocol=ttrpc version=3 Jul 15 23:53:46.244990 systemd[1]: Started cri-containerd-f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256.scope - libcontainer container f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256. 
Jul 15 23:53:46.305351 containerd[1563]: time="2025-07-15T23:53:46.305287241Z" level=info msg="StartContainer for \"f0e16f8b69ecd18c32c410d85458386c98bd3c20ab76d8478101e850d9a55256\" returns successfully" Jul 15 23:53:46.395699 containerd[1563]: time="2025-07-15T23:53:46.395556665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ntmgh,Uid:837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b,Namespace:kube-system,Attempt:0,}" Jul 15 23:53:46.422493 containerd[1563]: time="2025-07-15T23:53:46.422425519Z" level=info msg="connecting to shim 8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082" address="unix:///run/containerd/s/d39687cd9d7de7c43f8c3250ffc290e82432ea863559bc7b6f0ae260b54681b8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:53:46.458855 systemd[1]: Started cri-containerd-8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082.scope - libcontainer container 8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082. Jul 15 23:53:46.520973 containerd[1563]: time="2025-07-15T23:53:46.520911938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ntmgh,Uid:837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\"" Jul 15 23:53:47.186828 kubelet[2682]: I0715 23:53:47.186723 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2jpdn" podStartSLOduration=2.186699059 podStartE2EDuration="2.186699059s" podCreationTimestamp="2025-07-15 23:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:53:47.186512261 +0000 UTC m=+10.129048894" watchObservedRunningTime="2025-07-15 23:53:47.186699059 +0000 UTC m=+10.129235682" Jul 15 23:53:50.633739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1858922934.mount: Deactivated 
successfully. Jul 15 23:53:55.341218 containerd[1563]: time="2025-07-15T23:53:55.341125378Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:55.342521 containerd[1563]: time="2025-07-15T23:53:55.342460521Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 23:53:55.344207 containerd[1563]: time="2025-07-15T23:53:55.344110351Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:53:55.345583 containerd[1563]: time="2025-07-15T23:53:55.345528354Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.165411229s" Jul 15 23:53:55.345682 containerd[1563]: time="2025-07-15T23:53:55.345583864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 23:53:55.347090 containerd[1563]: time="2025-07-15T23:53:55.346994923Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:53:55.348597 containerd[1563]: time="2025-07-15T23:53:55.348554560Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for 
container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:53:55.360805 containerd[1563]: time="2025-07-15T23:53:55.360735869Z" level=info msg="Container bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:55.371810 containerd[1563]: time="2025-07-15T23:53:55.371743876Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\"" Jul 15 23:53:55.372464 containerd[1563]: time="2025-07-15T23:53:55.372427031Z" level=info msg="StartContainer for \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\"" Jul 15 23:53:55.373835 containerd[1563]: time="2025-07-15T23:53:55.373771920Z" level=info msg="connecting to shim bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" protocol=ttrpc version=3 Jul 15 23:53:55.433894 systemd[1]: Started cri-containerd-bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327.scope - libcontainer container bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327. Jul 15 23:53:55.484792 systemd[1]: cri-containerd-bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327.scope: Deactivated successfully. 
Jul 15 23:53:55.486681 containerd[1563]: time="2025-07-15T23:53:55.486605472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" id:\"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" pid:3121 exited_at:{seconds:1752623635 nanos:485956130}" Jul 15 23:53:55.747442 containerd[1563]: time="2025-07-15T23:53:55.747360500Z" level=info msg="received exit event container_id:\"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" id:\"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" pid:3121 exited_at:{seconds:1752623635 nanos:485956130}" Jul 15 23:53:55.749041 containerd[1563]: time="2025-07-15T23:53:55.748937438Z" level=info msg="StartContainer for \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" returns successfully" Jul 15 23:53:55.772668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327-rootfs.mount: Deactivated successfully. 
Jul 15 23:53:57.312712 containerd[1563]: time="2025-07-15T23:53:57.312665395Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:53:57.772588 containerd[1563]: time="2025-07-15T23:53:57.772519232Z" level=info msg="Container e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:57.783429 containerd[1563]: time="2025-07-15T23:53:57.783342485Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\"" Jul 15 23:53:57.784126 containerd[1563]: time="2025-07-15T23:53:57.784046543Z" level=info msg="StartContainer for \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\"" Jul 15 23:53:57.785421 containerd[1563]: time="2025-07-15T23:53:57.785360147Z" level=info msg="connecting to shim e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" protocol=ttrpc version=3 Jul 15 23:53:57.826966 systemd[1]: Started cri-containerd-e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07.scope - libcontainer container e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07. Jul 15 23:53:57.863160 containerd[1563]: time="2025-07-15T23:53:57.863119081Z" level=info msg="StartContainer for \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" returns successfully" Jul 15 23:53:57.878613 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:53:57.878962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 23:53:57.880229 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:53:57.882554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:53:57.885545 containerd[1563]: time="2025-07-15T23:53:57.885440002Z" level=info msg="received exit event container_id:\"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" id:\"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" pid:3168 exited_at:{seconds:1752623637 nanos:884296125}" Jul 15 23:53:57.885545 containerd[1563]: time="2025-07-15T23:53:57.885502805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" id:\"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" pid:3168 exited_at:{seconds:1752623637 nanos:884296125}" Jul 15 23:53:57.885910 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 23:53:57.886709 systemd[1]: cri-containerd-e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07.scope: Deactivated successfully. Jul 15 23:53:57.919086 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 23:53:58.371318 containerd[1563]: time="2025-07-15T23:53:58.370720664Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:53:58.492633 containerd[1563]: time="2025-07-15T23:53:58.492553000Z" level=info msg="Container af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:53:58.530116 containerd[1563]: time="2025-07-15T23:53:58.530047372Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\"" Jul 15 23:53:58.530758 containerd[1563]: time="2025-07-15T23:53:58.530712453Z" level=info msg="StartContainer for \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\"" Jul 15 23:53:58.532535 containerd[1563]: time="2025-07-15T23:53:58.532480780Z" level=info msg="connecting to shim af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" protocol=ttrpc version=3 Jul 15 23:53:58.563858 systemd[1]: Started cri-containerd-af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536.scope - libcontainer container af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536. Jul 15 23:53:58.615828 systemd[1]: cri-containerd-af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536.scope: Deactivated successfully. 
Jul 15 23:53:58.616643 containerd[1563]: time="2025-07-15T23:53:58.616607420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" id:\"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" pid:3216 exited_at:{seconds:1752623638 nanos:616318539}" Jul 15 23:53:58.772672 containerd[1563]: time="2025-07-15T23:53:58.772571254Z" level=info msg="received exit event container_id:\"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" id:\"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" pid:3216 exited_at:{seconds:1752623638 nanos:616318539}" Jul 15 23:53:58.774736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07-rootfs.mount: Deactivated successfully. Jul 15 23:53:58.775410 containerd[1563]: time="2025-07-15T23:53:58.775374149Z" level=info msg="StartContainer for \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" returns successfully" Jul 15 23:53:58.797789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536-rootfs.mount: Deactivated successfully. Jul 15 23:53:59.003771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941952587.mount: Deactivated successfully. 
Jul 15 23:53:59.322971 containerd[1563]: time="2025-07-15T23:53:59.322908930Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 23:53:59.626665 containerd[1563]: time="2025-07-15T23:53:59.626412297Z" level=info msg="Container ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:53:59.640514 containerd[1563]: time="2025-07-15T23:53:59.640464176Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\""
Jul 15 23:53:59.641187 containerd[1563]: time="2025-07-15T23:53:59.641151577Z" level=info msg="StartContainer for \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\""
Jul 15 23:53:59.642245 containerd[1563]: time="2025-07-15T23:53:59.642204218Z" level=info msg="connecting to shim ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" protocol=ttrpc version=3
Jul 15 23:53:59.648752 containerd[1563]: time="2025-07-15T23:53:59.648701472Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:59.649970 containerd[1563]: time="2025-07-15T23:53:59.649943103Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 15 23:53:59.652344 containerd[1563]: time="2025-07-15T23:53:59.651799236Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:53:59.653089 containerd[1563]: time="2025-07-15T23:53:59.653051858Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.305768482s"
Jul 15 23:53:59.653152 containerd[1563]: time="2025-07-15T23:53:59.653085743Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 15 23:53:59.655714 containerd[1563]: time="2025-07-15T23:53:59.655675257Z" level=info msg="CreateContainer within sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 15 23:53:59.667275 containerd[1563]: time="2025-07-15T23:53:59.667215770Z" level=info msg="Container cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:53:59.670845 systemd[1]: Started cri-containerd-ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0.scope - libcontainer container ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0.
Jul 15 23:53:59.675521 containerd[1563]: time="2025-07-15T23:53:59.675265889Z" level=info msg="CreateContainer within sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\""
Jul 15 23:53:59.677411 containerd[1563]: time="2025-07-15T23:53:59.676130430Z" level=info msg="StartContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\""
Jul 15 23:53:59.677411 containerd[1563]: time="2025-07-15T23:53:59.677107215Z" level=info msg="connecting to shim cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f" address="unix:///run/containerd/s/d39687cd9d7de7c43f8c3250ffc290e82432ea863559bc7b6f0ae260b54681b8" protocol=ttrpc version=3
Jul 15 23:53:59.703841 systemd[1]: Started cri-containerd-cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f.scope - libcontainer container cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f.
Jul 15 23:53:59.708645 systemd[1]: cri-containerd-ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0.scope: Deactivated successfully.
Jul 15 23:53:59.711557 containerd[1563]: time="2025-07-15T23:53:59.711504006Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" id:\"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" pid:3271 exited_at:{seconds:1752623639 nanos:711203183}"
Jul 15 23:53:59.713762 containerd[1563]: time="2025-07-15T23:53:59.713643722Z" level=info msg="received exit event container_id:\"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" id:\"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" pid:3271 exited_at:{seconds:1752623639 nanos:711203183}"
Jul 15 23:53:59.723580 containerd[1563]: time="2025-07-15T23:53:59.723491273Z" level=info msg="StartContainer for \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" returns successfully"
Jul 15 23:53:59.794746 containerd[1563]: time="2025-07-15T23:53:59.794695327Z" level=info msg="StartContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" returns successfully"
Jul 15 23:54:00.329924 containerd[1563]: time="2025-07-15T23:54:00.329855395Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:54:00.817698 containerd[1563]: time="2025-07-15T23:54:00.814628792Z" level=info msg="Container 9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:54:00.821589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164789519.mount: Deactivated successfully.
Jul 15 23:54:00.835923 containerd[1563]: time="2025-07-15T23:54:00.835862192Z" level=info msg="CreateContainer within sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\""
Jul 15 23:54:00.836609 containerd[1563]: time="2025-07-15T23:54:00.836580142Z" level=info msg="StartContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\""
Jul 15 23:54:00.837933 containerd[1563]: time="2025-07-15T23:54:00.837849675Z" level=info msg="connecting to shim 9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d" address="unix:///run/containerd/s/a35c482f590ee2fa71d86b2c09ac1009cbf1fcf74a761f14a1202a4780bea0c8" protocol=ttrpc version=3
Jul 15 23:54:00.841756 kubelet[2682]: I0715 23:54:00.841391 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ntmgh" podStartSLOduration=1.709323583 podStartE2EDuration="14.841361025s" podCreationTimestamp="2025-07-15 23:53:46 +0000 UTC" firstStartedPulling="2025-07-15 23:53:46.521985169 +0000 UTC m=+9.464521792" lastFinishedPulling="2025-07-15 23:53:59.654022611 +0000 UTC m=+22.596559234" observedRunningTime="2025-07-15 23:54:00.814416738 +0000 UTC m=+23.756953361" watchObservedRunningTime="2025-07-15 23:54:00.841361025 +0000 UTC m=+23.783897648"
Jul 15 23:54:00.868910 systemd[1]: Started cri-containerd-9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d.scope - libcontainer container 9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d.
Jul 15 23:54:00.935828 containerd[1563]: time="2025-07-15T23:54:00.935758150Z" level=info msg="StartContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" returns successfully"
Jul 15 23:54:01.103525 containerd[1563]: time="2025-07-15T23:54:01.103213765Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" id:\"9b1b01a5f09a0ab65aec8fde350852c86760d4448259bb97cf472e0b01553734\" pid:3375 exited_at:{seconds:1752623641 nanos:102850412}"
Jul 15 23:54:01.157235 kubelet[2682]: I0715 23:54:01.157175 2682 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 15 23:54:02.301788 kubelet[2682]: I0715 23:54:02.301369 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-96tsw" podStartSLOduration=8.133296677 podStartE2EDuration="17.301346365s" podCreationTimestamp="2025-07-15 23:53:45 +0000 UTC" firstStartedPulling="2025-07-15 23:53:46.178613015 +0000 UTC m=+9.121149638" lastFinishedPulling="2025-07-15 23:53:55.346662683 +0000 UTC m=+18.289199326" observedRunningTime="2025-07-15 23:54:02.298820799 +0000 UTC m=+25.241357442" watchObservedRunningTime="2025-07-15 23:54:02.301346365 +0000 UTC m=+25.243882988"
Jul 15 23:54:02.315850 systemd[1]: Created slice kubepods-burstable-podc6d2f982_f7b8_4a04_a483_7f6f79313d9f.slice - libcontainer container kubepods-burstable-podc6d2f982_f7b8_4a04_a483_7f6f79313d9f.slice.
Jul 15 23:54:02.325845 systemd[1]: Created slice kubepods-burstable-pod1e286c60_e981_412d_82f4_ab0b04c37569.slice - libcontainer container kubepods-burstable-pod1e286c60_e981_412d_82f4_ab0b04c37569.slice.
Jul 15 23:54:02.391337 kubelet[2682]: I0715 23:54:02.391265 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnnrb\" (UniqueName: \"kubernetes.io/projected/c6d2f982-f7b8-4a04-a483-7f6f79313d9f-kube-api-access-dnnrb\") pod \"coredns-7c65d6cfc9-bw6sz\" (UID: \"c6d2f982-f7b8-4a04-a483-7f6f79313d9f\") " pod="kube-system/coredns-7c65d6cfc9-bw6sz"
Jul 15 23:54:02.391337 kubelet[2682]: I0715 23:54:02.391338 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6d2f982-f7b8-4a04-a483-7f6f79313d9f-config-volume\") pod \"coredns-7c65d6cfc9-bw6sz\" (UID: \"c6d2f982-f7b8-4a04-a483-7f6f79313d9f\") " pod="kube-system/coredns-7c65d6cfc9-bw6sz"
Jul 15 23:54:02.391337 kubelet[2682]: I0715 23:54:02.391369 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e286c60-e981-412d-82f4-ab0b04c37569-config-volume\") pod \"coredns-7c65d6cfc9-h5w5w\" (UID: \"1e286c60-e981-412d-82f4-ab0b04c37569\") " pod="kube-system/coredns-7c65d6cfc9-h5w5w"
Jul 15 23:54:02.391634 kubelet[2682]: I0715 23:54:02.391404 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rpf\" (UniqueName: \"kubernetes.io/projected/1e286c60-e981-412d-82f4-ab0b04c37569-kube-api-access-t5rpf\") pod \"coredns-7c65d6cfc9-h5w5w\" (UID: \"1e286c60-e981-412d-82f4-ab0b04c37569\") " pod="kube-system/coredns-7c65d6cfc9-h5w5w"
Jul 15 23:54:02.622726 containerd[1563]: time="2025-07-15T23:54:02.622572263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bw6sz,Uid:c6d2f982-f7b8-4a04-a483-7f6f79313d9f,Namespace:kube-system,Attempt:0,}"
Jul 15 23:54:02.631033 containerd[1563]: time="2025-07-15T23:54:02.630985613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h5w5w,Uid:1e286c60-e981-412d-82f4-ab0b04c37569,Namespace:kube-system,Attempt:0,}"
Jul 15 23:54:04.740861 systemd-networkd[1484]: cilium_host: Link UP
Jul 15 23:54:04.741061 systemd-networkd[1484]: cilium_net: Link UP
Jul 15 23:54:04.741241 systemd-networkd[1484]: cilium_net: Gained carrier
Jul 15 23:54:04.741433 systemd-networkd[1484]: cilium_host: Gained carrier
Jul 15 23:54:04.877526 systemd-networkd[1484]: cilium_vxlan: Link UP
Jul 15 23:54:04.877541 systemd-networkd[1484]: cilium_vxlan: Gained carrier
Jul 15 23:54:04.928904 systemd-networkd[1484]: cilium_net: Gained IPv6LL
Jul 15 23:54:05.120921 systemd-networkd[1484]: cilium_host: Gained IPv6LL
Jul 15 23:54:05.145691 kernel: NET: Registered PF_ALG protocol family
Jul 15 23:54:05.941569 systemd-networkd[1484]: lxc_health: Link UP
Jul 15 23:54:05.943938 systemd-networkd[1484]: lxc_health: Gained carrier
Jul 15 23:54:06.208459 systemd-networkd[1484]: lxce098477ea463: Link UP
Jul 15 23:54:06.209053 systemd-networkd[1484]: lxc408e05b14037: Link UP
Jul 15 23:54:06.210699 kernel: eth0: renamed from tmp2d400
Jul 15 23:54:06.212690 kernel: eth0: renamed from tmp2d044
Jul 15 23:54:06.215354 systemd-networkd[1484]: lxc408e05b14037: Gained carrier
Jul 15 23:54:06.215704 systemd-networkd[1484]: lxce098477ea463: Gained carrier
Jul 15 23:54:06.416967 systemd-networkd[1484]: cilium_vxlan: Gained IPv6LL
Jul 15 23:54:07.632982 systemd-networkd[1484]: lxce098477ea463: Gained IPv6LL
Jul 15 23:54:07.696939 systemd-networkd[1484]: lxc408e05b14037: Gained IPv6LL
Jul 15 23:54:07.824852 systemd-networkd[1484]: lxc_health: Gained IPv6LL
Jul 15 23:54:10.230538 containerd[1563]: time="2025-07-15T23:54:10.230452382Z" level=info msg="connecting to shim 2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8" address="unix:///run/containerd/s/807319297d78f5fe3745adcd3c33018e657dcca85904ec0e93c74c1bcf5471fc" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:54:10.259852 systemd[1]: Started cri-containerd-2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8.scope - libcontainer container 2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8.
Jul 15 23:54:10.275788 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:54:10.509892 containerd[1563]: time="2025-07-15T23:54:10.509758559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h5w5w,Uid:1e286c60-e981-412d-82f4-ab0b04c37569,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8\""
Jul 15 23:54:10.512978 containerd[1563]: time="2025-07-15T23:54:10.512937629Z" level=info msg="CreateContainer within sandbox \"2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:54:10.543455 containerd[1563]: time="2025-07-15T23:54:10.543405364Z" level=info msg="connecting to shim 2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264" address="unix:///run/containerd/s/18e7872ba395ed51a6d7549d73d0b14b534a3ee1e29ea19b276f111a4ed1daa0" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:54:10.569703 containerd[1563]: time="2025-07-15T23:54:10.568374169Z" level=info msg="Container 3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:54:10.580144 containerd[1563]: time="2025-07-15T23:54:10.579715881Z" level=info msg="CreateContainer within sandbox \"2d400fd77d27d7733097f9251cf3169bf6a39ba584d329bbfe82550492e905f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b\""
Jul 15 23:54:10.579925 systemd[1]: Started cri-containerd-2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264.scope - libcontainer container 2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264.
Jul 15 23:54:10.580944 containerd[1563]: time="2025-07-15T23:54:10.580918796Z" level=info msg="StartContainer for \"3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b\""
Jul 15 23:54:10.582930 containerd[1563]: time="2025-07-15T23:54:10.582818516Z" level=info msg="connecting to shim 3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b" address="unix:///run/containerd/s/807319297d78f5fe3745adcd3c33018e657dcca85904ec0e93c74c1bcf5471fc" protocol=ttrpc version=3
Jul 15 23:54:10.598501 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:54:10.608857 systemd[1]: Started cri-containerd-3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b.scope - libcontainer container 3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b.
Jul 15 23:54:10.638686 containerd[1563]: time="2025-07-15T23:54:10.638609169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bw6sz,Uid:c6d2f982-f7b8-4a04-a483-7f6f79313d9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264\""
Jul 15 23:54:10.641501 containerd[1563]: time="2025-07-15T23:54:10.641291036Z" level=info msg="CreateContainer within sandbox \"2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:54:10.652894 containerd[1563]: time="2025-07-15T23:54:10.652850842Z" level=info msg="StartContainer for \"3619d7c6f8aa07c44becd84bd038491785b6af508de836508583c9090978b18b\" returns successfully"
Jul 15 23:54:10.658376 containerd[1563]: time="2025-07-15T23:54:10.658315205Z" level=info msg="Container bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:54:10.666958 containerd[1563]: time="2025-07-15T23:54:10.666913233Z" level=info msg="CreateContainer within sandbox \"2d0445a7e470f3baedc380472b9087d9ba6537f4033883d0936ff2c4b8e67264\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191\""
Jul 15 23:54:10.667452 containerd[1563]: time="2025-07-15T23:54:10.667417141Z" level=info msg="StartContainer for \"bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191\""
Jul 15 23:54:10.668555 containerd[1563]: time="2025-07-15T23:54:10.668525687Z" level=info msg="connecting to shim bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191" address="unix:///run/containerd/s/18e7872ba395ed51a6d7549d73d0b14b534a3ee1e29ea19b276f111a4ed1daa0" protocol=ttrpc version=3
Jul 15 23:54:10.693854 systemd[1]: Started cri-containerd-bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191.scope - libcontainer container bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191.
Jul 15 23:54:10.731767 containerd[1563]: time="2025-07-15T23:54:10.731637662Z" level=info msg="StartContainer for \"bdc8ef0f62dbbb5ecc5e8c3b8b31faf903d5f7cf666d9bf5aa6bbcb59c54c191\" returns successfully"
Jul 15 23:54:11.169768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652990579.mount: Deactivated successfully.
Jul 15 23:54:11.559513 kubelet[2682]: I0715 23:54:11.559406 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h5w5w" podStartSLOduration=25.559388846 podStartE2EDuration="25.559388846s" podCreationTimestamp="2025-07-15 23:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:54:11.558903945 +0000 UTC m=+34.501440568" watchObservedRunningTime="2025-07-15 23:54:11.559388846 +0000 UTC m=+34.501925469"
Jul 15 23:54:11.656673 kubelet[2682]: I0715 23:54:11.656571 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bw6sz" podStartSLOduration=25.656546329 podStartE2EDuration="25.656546329s" podCreationTimestamp="2025-07-15 23:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:54:11.59744851 +0000 UTC m=+34.539985123" watchObservedRunningTime="2025-07-15 23:54:11.656546329 +0000 UTC m=+34.599082952"
Jul 15 23:54:20.105742 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:49020.service - OpenSSH per-connection server daemon (10.0.0.1:49020).
Jul 15 23:54:20.182060 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 49020 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:20.185291 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:20.192736 systemd-logind[1498]: New session 8 of user core.
Jul 15 23:54:20.199147 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 23:54:20.362581 sshd[4033]: Connection closed by 10.0.0.1 port 49020
Jul 15 23:54:20.362969 sshd-session[4031]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:20.367931 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:49020.service: Deactivated successfully.
Jul 15 23:54:20.370468 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 23:54:20.371793 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit.
Jul 15 23:54:20.374101 systemd-logind[1498]: Removed session 8.
Jul 15 23:54:25.389062 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:49028.service - OpenSSH per-connection server daemon (10.0.0.1:49028).
Jul 15 23:54:25.446526 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 49028 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:25.448597 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:25.454746 systemd-logind[1498]: New session 9 of user core.
Jul 15 23:54:25.466032 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:54:25.601619 sshd[4049]: Connection closed by 10.0.0.1 port 49028
Jul 15 23:54:25.602001 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:25.608111 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:49028.service: Deactivated successfully.
Jul 15 23:54:25.610723 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:54:25.611679 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:54:25.613333 systemd-logind[1498]: Removed session 9.
Jul 15 23:54:30.618828 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:48586.service - OpenSSH per-connection server daemon (10.0.0.1:48586).
Jul 15 23:54:30.674579 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 48586 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:30.676596 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:30.681404 systemd-logind[1498]: New session 10 of user core.
Jul 15 23:54:30.690789 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:54:30.898247 sshd[4066]: Connection closed by 10.0.0.1 port 48586
Jul 15 23:54:30.898514 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:30.903483 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:48586.service: Deactivated successfully.
Jul 15 23:54:30.906147 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:54:30.907239 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:54:30.909197 systemd-logind[1498]: Removed session 10.
Jul 15 23:54:35.917350 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:48588.service - OpenSSH per-connection server daemon (10.0.0.1:48588).
Jul 15 23:54:35.983282 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 48588 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:35.985666 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:35.991607 systemd-logind[1498]: New session 11 of user core.
Jul 15 23:54:36.004065 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:54:36.132783 sshd[4082]: Connection closed by 10.0.0.1 port 48588
Jul 15 23:54:36.133223 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:36.137368 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:48588.service: Deactivated successfully.
Jul 15 23:54:36.139570 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:54:36.140546 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:54:36.142271 systemd-logind[1498]: Removed session 11.
Jul 15 23:54:41.155934 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:37282.service - OpenSSH per-connection server daemon (10.0.0.1:37282).
Jul 15 23:54:41.228708 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 37282 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:41.230634 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:41.235995 systemd-logind[1498]: New session 12 of user core.
Jul 15 23:54:41.242818 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:54:41.370718 sshd[4102]: Connection closed by 10.0.0.1 port 37282
Jul 15 23:54:41.371342 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:41.387560 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:37282.service: Deactivated successfully.
Jul 15 23:54:41.390470 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:54:41.391625 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:54:41.396525 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:37288.service - OpenSSH per-connection server daemon (10.0.0.1:37288).
Jul 15 23:54:41.397312 systemd-logind[1498]: Removed session 12.
Jul 15 23:54:41.461828 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 37288 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:41.463783 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:41.470165 systemd-logind[1498]: New session 13 of user core.
Jul 15 23:54:41.480962 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:54:41.640836 sshd[4118]: Connection closed by 10.0.0.1 port 37288
Jul 15 23:54:41.641306 sshd-session[4116]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:41.657497 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:37288.service: Deactivated successfully.
Jul 15 23:54:41.660383 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:54:41.662471 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:54:41.668186 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:37292.service - OpenSSH per-connection server daemon (10.0.0.1:37292).
Jul 15 23:54:41.671211 systemd-logind[1498]: Removed session 13.
Jul 15 23:54:41.723714 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 37292 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:41.725807 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:41.731137 systemd-logind[1498]: New session 14 of user core.
Jul 15 23:54:41.740883 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:54:41.863189 sshd[4131]: Connection closed by 10.0.0.1 port 37292
Jul 15 23:54:41.863599 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:41.869539 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:37292.service: Deactivated successfully.
Jul 15 23:54:41.871796 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:54:41.872794 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:54:41.874380 systemd-logind[1498]: Removed session 14.
Jul 15 23:54:46.875750 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310).
Jul 15 23:54:46.935216 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:46.936862 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:46.941698 systemd-logind[1498]: New session 15 of user core.
Jul 15 23:54:46.955911 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:54:47.072431 sshd[4148]: Connection closed by 10.0.0.1 port 37310
Jul 15 23:54:47.072764 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:47.076799 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:37310.service: Deactivated successfully.
Jul 15 23:54:47.078640 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:54:47.079466 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:54:47.080713 systemd-logind[1498]: Removed session 15.
Jul 15 23:54:49.142354 kubelet[2682]: E0715 23:54:49.142290 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:54:52.087027 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:42290.service - OpenSSH per-connection server daemon (10.0.0.1:42290).
Jul 15 23:54:52.144412 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 42290 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:52.146209 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:52.151096 systemd-logind[1498]: New session 16 of user core.
Jul 15 23:54:52.161789 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:54:52.280101 sshd[4163]: Connection closed by 10.0.0.1 port 42290
Jul 15 23:54:52.281046 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:52.296946 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:42290.service: Deactivated successfully.
Jul 15 23:54:52.299471 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:54:52.300479 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:54:52.305101 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:42304.service - OpenSSH per-connection server daemon (10.0.0.1:42304).
Jul 15 23:54:52.305814 systemd-logind[1498]: Removed session 16.
Jul 15 23:54:52.364758 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 42304 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:52.366414 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:52.371109 systemd-logind[1498]: New session 17 of user core.
Jul 15 23:54:52.378872 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:54:52.628856 sshd[4179]: Connection closed by 10.0.0.1 port 42304
Jul 15 23:54:52.629138 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:52.642726 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:42304.service: Deactivated successfully.
Jul 15 23:54:52.644835 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:54:52.645907 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:54:52.649612 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:42318.service - OpenSSH per-connection server daemon (10.0.0.1:42318).
Jul 15 23:54:52.651062 systemd-logind[1498]: Removed session 17.
Jul 15 23:54:52.720597 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 42318 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:52.722461 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:52.727836 systemd-logind[1498]: New session 18 of user core.
Jul 15 23:54:52.738934 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:54:53.855190 sshd[4192]: Connection closed by 10.0.0.1 port 42318
Jul 15 23:54:53.855782 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:53.870402 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:42318.service: Deactivated successfully.
Jul 15 23:54:53.873432 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:54:53.874479 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:54:53.881383 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356).
Jul 15 23:54:53.883442 systemd-logind[1498]: Removed session 18.
Jul 15 23:54:53.935257 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:53.937503 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:53.943601 systemd-logind[1498]: New session 19 of user core.
Jul 15 23:54:53.958963 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:54:54.148183 kubelet[2682]: E0715 23:54:54.147993 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:54:54.224505 sshd[4214]: Connection closed by 10.0.0.1 port 42356
Jul 15 23:54:54.225064 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:54.238356 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:42356.service: Deactivated successfully.
Jul 15 23:54:54.241391 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:54:54.242441 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:54:54.246466 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364).
Jul 15 23:54:54.248914 systemd-logind[1498]: Removed session 19.
Jul 15 23:54:54.301301 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:54.303397 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:54.308283 systemd-logind[1498]: New session 20 of user core.
Jul 15 23:54:54.318776 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:54:54.466529 sshd[4228]: Connection closed by 10.0.0.1 port 42364
Jul 15 23:54:54.466953 sshd-session[4226]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:54.472518 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:42364.service: Deactivated successfully.
Jul 15 23:54:54.474871 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:54:54.475889 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:54:54.477360 systemd-logind[1498]: Removed session 20.
Jul 15 23:54:59.483059 systemd[1]: Started sshd@20-10.0.0.86:22-10.0.0.1:45618.service - OpenSSH per-connection server daemon (10.0.0.1:45618).
Jul 15 23:54:59.545347 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 45618 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:54:59.547621 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:54:59.553099 systemd-logind[1498]: New session 21 of user core.
Jul 15 23:54:59.565006 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:54:59.684305 sshd[4243]: Connection closed by 10.0.0.1 port 45618
Jul 15 23:54:59.684728 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Jul 15 23:54:59.690071 systemd[1]: sshd@20-10.0.0.86:22-10.0.0.1:45618.service: Deactivated successfully.
Jul 15 23:54:59.692919 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:54:59.693886 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:54:59.695739 systemd-logind[1498]: Removed session 21.
Jul 15 23:55:03.142430 kubelet[2682]: E0715 23:55:03.142376 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:04.141959 kubelet[2682]: E0715 23:55:04.141897 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:04.701318 systemd[1]: Started sshd@21-10.0.0.86:22-10.0.0.1:45684.service - OpenSSH per-connection server daemon (10.0.0.1:45684).
Jul 15 23:55:04.756012 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 45684 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:55:04.757464 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:55:04.761760 systemd-logind[1498]: New session 22 of user core.
Jul 15 23:55:04.772805 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:55:04.879613 sshd[4261]: Connection closed by 10.0.0.1 port 45684
Jul 15 23:55:04.879975 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Jul 15 23:55:04.884860 systemd[1]: sshd@21-10.0.0.86:22-10.0.0.1:45684.service: Deactivated successfully.
Jul 15 23:55:04.887020 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:55:04.887791 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:55:04.889038 systemd-logind[1498]: Removed session 22.
Jul 15 23:55:07.141959 kubelet[2682]: E0715 23:55:07.141911 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:09.903795 systemd[1]: Started sshd@22-10.0.0.86:22-10.0.0.1:49720.service - OpenSSH per-connection server daemon (10.0.0.1:49720).
Jul 15 23:55:09.953130 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 49720 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:55:09.954591 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:55:09.959307 systemd-logind[1498]: New session 23 of user core.
Jul 15 23:55:09.969851 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:55:10.108782 sshd[4276]: Connection closed by 10.0.0.1 port 49720
Jul 15 23:55:10.109128 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Jul 15 23:55:10.114409 systemd[1]: sshd@22-10.0.0.86:22-10.0.0.1:49720.service: Deactivated successfully.
Jul 15 23:55:10.117254 systemd[1]: session-23.scope: Deactivated successfully.
Jul 15 23:55:10.118251 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit.
Jul 15 23:55:10.119968 systemd-logind[1498]: Removed session 23.
Jul 15 23:55:15.128353 systemd[1]: Started sshd@23-10.0.0.86:22-10.0.0.1:49730.service - OpenSSH per-connection server daemon (10.0.0.1:49730).
Jul 15 23:55:15.188363 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 49730 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:55:15.190444 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:55:15.195561 systemd-logind[1498]: New session 24 of user core.
Jul 15 23:55:15.205811 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 15 23:55:15.323193 sshd[4292]: Connection closed by 10.0.0.1 port 49730
Jul 15 23:55:15.323587 sshd-session[4290]: pam_unix(sshd:session): session closed for user core
Jul 15 23:55:15.340387 systemd[1]: sshd@23-10.0.0.86:22-10.0.0.1:49730.service: Deactivated successfully.
Jul 15 23:55:15.342543 systemd[1]: session-24.scope: Deactivated successfully.
Jul 15 23:55:15.343426 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit.
Jul 15 23:55:15.346780 systemd[1]: Started sshd@24-10.0.0.86:22-10.0.0.1:49738.service - OpenSSH per-connection server daemon (10.0.0.1:49738).
Jul 15 23:55:15.347612 systemd-logind[1498]: Removed session 24.
Jul 15 23:55:15.402342 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 49738 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA
Jul 15 23:55:15.404099 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:55:15.409835 systemd-logind[1498]: New session 25 of user core.
Jul 15 23:55:15.417839 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 15 23:55:16.142011 kubelet[2682]: E0715 23:55:16.141932 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:16.766432 containerd[1563]: time="2025-07-15T23:55:16.766376708Z" level=info msg="StopContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" with timeout 30 (s)"
Jul 15 23:55:16.772883 containerd[1563]: time="2025-07-15T23:55:16.772852561Z" level=info msg="Stop container \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" with signal terminated"
Jul 15 23:55:16.788179 systemd[1]: cri-containerd-cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f.scope: Deactivated successfully.
Jul 15 23:55:16.789915 containerd[1563]: time="2025-07-15T23:55:16.789843005Z" level=info msg="received exit event container_id:\"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" id:\"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" pid:3295 exited_at:{seconds:1752623716 nanos:789008524}"
Jul 15 23:55:16.812258 containerd[1563]: time="2025-07-15T23:55:16.812205023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" id:\"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" pid:3295 exited_at:{seconds:1752623716 nanos:789008524}"
Jul 15 23:55:16.816375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f-rootfs.mount: Deactivated successfully.
Jul 15 23:55:16.823480 containerd[1563]: time="2025-07-15T23:55:16.823427856Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:55:16.826411 containerd[1563]: time="2025-07-15T23:55:16.826372346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" id:\"1d14c230c490f1b8091fb4ecd7d1fcb4f4462d7eac75792ccfc141bb44c1ad23\" pid:4342 exited_at:{seconds:1752623716 nanos:825895097}"
Jul 15 23:55:16.829252 containerd[1563]: time="2025-07-15T23:55:16.829230012Z" level=info msg="StopContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" returns successfully"
Jul 15 23:55:16.830087 containerd[1563]: time="2025-07-15T23:55:16.830029886Z" level=info msg="StopPodSandbox for \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\""
Jul 15 23:55:16.830087 containerd[1563]: time="2025-07-15T23:55:16.830092223Z" level=info msg="Container to stop \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.831734 containerd[1563]: time="2025-07-15T23:55:16.831691452Z" level=info msg="StopContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" with timeout 2 (s)"
Jul 15 23:55:16.831928 containerd[1563]: time="2025-07-15T23:55:16.831905394Z" level=info msg="Stop container \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" with signal terminated"
Jul 15 23:55:16.840701 systemd[1]: cri-containerd-8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082.scope: Deactivated successfully.
Jul 15 23:55:16.842212 systemd-networkd[1484]: lxc_health: Link DOWN
Jul 15 23:55:16.842222 systemd-networkd[1484]: lxc_health: Lost carrier
Jul 15 23:55:16.845199 containerd[1563]: time="2025-07-15T23:55:16.845167744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" pid:2976 exit_status:137 exited_at:{seconds:1752623716 nanos:844894289}"
Jul 15 23:55:16.876768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082-rootfs.mount: Deactivated successfully.
Jul 15 23:55:16.885472 systemd[1]: cri-containerd-9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d.scope: Deactivated successfully.
Jul 15 23:55:16.885865 systemd[1]: cri-containerd-9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d.scope: Consumed 7.530s CPU time, 123.7M memory peak, 592K read from disk, 13.3M written to disk.
Jul 15 23:55:16.886166 containerd[1563]: time="2025-07-15T23:55:16.886109436Z" level=info msg="received exit event container_id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" pid:3342 exited_at:{seconds:1752623716 nanos:885763074}"
Jul 15 23:55:16.890071 containerd[1563]: time="2025-07-15T23:55:16.890031624Z" level=info msg="shim disconnected" id=8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082 namespace=k8s.io
Jul 15 23:55:16.890071 containerd[1563]: time="2025-07-15T23:55:16.890062232Z" level=warning msg="cleaning up after shim disconnected" id=8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082 namespace=k8s.io
Jul 15 23:55:16.894057 containerd[1563]: time="2025-07-15T23:55:16.890070577Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:55:16.908618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d-rootfs.mount: Deactivated successfully.
Jul 15 23:55:16.917946 containerd[1563]: time="2025-07-15T23:55:16.917877270Z" level=info msg="StopContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" returns successfully"
Jul 15 23:55:16.920006 containerd[1563]: time="2025-07-15T23:55:16.919918870Z" level=info msg="StopPodSandbox for \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\""
Jul 15 23:55:16.920006 containerd[1563]: time="2025-07-15T23:55:16.919981338Z" level=info msg="Container to stop \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.920006 containerd[1563]: time="2025-07-15T23:55:16.919991136Z" level=info msg="Container to stop \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.920006 containerd[1563]: time="2025-07-15T23:55:16.919998901Z" level=info msg="Container to stop \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.920006 containerd[1563]: time="2025-07-15T23:55:16.920006936Z" level=info msg="Container to stop \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.920213 containerd[1563]: time="2025-07-15T23:55:16.920014059Z" level=info msg="Container to stop \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:55:16.929072 systemd[1]: cri-containerd-a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf.scope: Deactivated successfully.
Jul 15 23:55:16.936053 containerd[1563]: time="2025-07-15T23:55:16.935952583Z" level=error msg="Failed to handle event container_id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" pid:2976 exit_status:137 exited_at:{seconds:1752623716 nanos:844894289} for 8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
Jul 15 23:55:16.938766 containerd[1563]: time="2025-07-15T23:55:16.936154113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" id:\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" pid:3342 exited_at:{seconds:1752623716 nanos:885763074}"
Jul 15 23:55:16.938766 containerd[1563]: time="2025-07-15T23:55:16.936221890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" id:\"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" pid:2845 exit_status:137 exited_at:{seconds:1752623716 nanos:930970090}"
Jul 15 23:55:16.938721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082-shm.mount: Deactivated successfully.
Jul 15 23:55:16.948672 containerd[1563]: time="2025-07-15T23:55:16.948605056Z" level=info msg="received exit event sandbox_id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" exit_status:137 exited_at:{seconds:1752623716 nanos:844894289}"
Jul 15 23:55:16.953848 containerd[1563]: time="2025-07-15T23:55:16.953792925Z" level=info msg="TearDown network for sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" successfully"
Jul 15 23:55:16.953848 containerd[1563]: time="2025-07-15T23:55:16.953842118Z" level=info msg="StopPodSandbox for \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" returns successfully"
Jul 15 23:55:16.960457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf-rootfs.mount: Deactivated successfully.
Jul 15 23:55:16.963267 containerd[1563]: time="2025-07-15T23:55:16.963058326Z" level=info msg="shim disconnected" id=a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf namespace=k8s.io
Jul 15 23:55:16.963267 containerd[1563]: time="2025-07-15T23:55:16.963091258Z" level=warning msg="cleaning up after shim disconnected" id=a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf namespace=k8s.io
Jul 15 23:55:16.963267 containerd[1563]: time="2025-07-15T23:55:16.963098622Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:55:16.964776 containerd[1563]: time="2025-07-15T23:55:16.964740551Z" level=info msg="received exit event sandbox_id:\"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" exit_status:137 exited_at:{seconds:1752623716 nanos:930970090}"
Jul 15 23:55:16.966544 containerd[1563]: time="2025-07-15T23:55:16.966491054Z" level=info msg="TearDown network for sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" successfully"
Jul 15 23:55:16.967116 containerd[1563]: time="2025-07-15T23:55:16.966631337Z" level=info msg="StopPodSandbox for \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" returns successfully"
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166739 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-net\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166791 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-run\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166812 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-xtables-lock\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166834 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hubble-tls\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166848 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-lib-modules\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.166893 kubelet[2682]: I0715 23:55:17.166869 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-etc-cni-netd\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.167630 kubelet[2682]: I0715 23:55:17.166876 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.167630 kubelet[2682]: I0715 23:55:17.166876 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.167630 kubelet[2682]: I0715 23:55:17.166891 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-cgroup\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.167630 kubelet[2682]: I0715 23:55:17.166933 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.167630 kubelet[2682]: I0715 23:55:17.166958 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-cilium-config-path\") pod \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\" (UID: \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\") "
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.166966 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.166980 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c98x\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-kube-api-access-6c98x\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.166982 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.166996 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-bpf-maps\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.167015 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6010ff85-230a-4b4f-a347-cfa7fcb042f6-clustermesh-secrets\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.167986 kubelet[2682]: I0715 23:55:17.167027 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hostproc\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167043 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnwbj\" (UniqueName: \"kubernetes.io/projected/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-kube-api-access-rnwbj\") pod \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\" (UID: \"837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b\") "
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167059 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-config-path\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167072 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-kernel\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167084 2682 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cni-path\") pod \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\" (UID: \"6010ff85-230a-4b4f-a347-cfa7fcb042f6\") "
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167116 2682 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167133 2682 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.168182 kubelet[2682]: I0715 23:55:17.167141 2682 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167149 2682 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167157 2682 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167176 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167505 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167530 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.168426 kubelet[2682]: I0715 23:55:17.167545 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.170959 kubelet[2682]: I0715 23:55:17.170218 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b" (UID: "837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 23:55:17.170959 kubelet[2682]: I0715 23:55:17.170348 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 15 23:55:17.172128 kubelet[2682]: I0715 23:55:17.172062 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 23:55:17.172531 kubelet[2682]: I0715 23:55:17.172490 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-kube-api-access-6c98x" (OuterVolumeSpecName: "kube-api-access-6c98x") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "kube-api-access-6c98x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 23:55:17.173687 kubelet[2682]: I0715 23:55:17.173614 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6010ff85-230a-4b4f-a347-cfa7fcb042f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 15 23:55:17.174357 kubelet[2682]: I0715 23:55:17.174316 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-kube-api-access-rnwbj" (OuterVolumeSpecName: "kube-api-access-rnwbj") pod "837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b" (UID: "837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b"). InnerVolumeSpecName "kube-api-access-rnwbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 15 23:55:17.175565 kubelet[2682]: I0715 23:55:17.175527 2682 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6010ff85-230a-4b4f-a347-cfa7fcb042f6" (UID: "6010ff85-230a-4b4f-a347-cfa7fcb042f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 15 23:55:17.201497 kubelet[2682]: E0715 23:55:17.201431 2682 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 23:55:17.267843 kubelet[2682]: I0715 23:55:17.267791 2682 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.267843 kubelet[2682]: I0715 23:55:17.267836 2682 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6010ff85-230a-4b4f-a347-cfa7fcb042f6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.267843 kubelet[2682]: I0715 23:55:17.267848 2682 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnwbj\" (UniqueName: \"kubernetes.io/projected/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-kube-api-access-rnwbj\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.267843 kubelet[2682]: I0715 23:55:17.267858 2682 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267869 2682 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267878 2682 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267885 2682 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6010ff85-230a-4b4f-a347-cfa7fcb042f6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267893 2682 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6010ff85-230a-4b4f-a347-cfa7fcb042f6-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267901 2682 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267909 2682 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c98x\" (UniqueName: \"kubernetes.io/projected/6010ff85-230a-4b4f-a347-cfa7fcb042f6-kube-api-access-6c98x\") on node \"localhost\" DevicePath \"\""
Jul 15 23:55:17.268056 kubelet[2682]: I0715 23:55:17.267918 2682 
reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:55:17.500591 kubelet[2682]: I0715 23:55:17.500498 2682 scope.go:117] "RemoveContainer" containerID="cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f" Jul 15 23:55:17.503707 containerd[1563]: time="2025-07-15T23:55:17.503168859Z" level=info msg="RemoveContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\"" Jul 15 23:55:17.510638 systemd[1]: Removed slice kubepods-besteffort-pod837b0db6_d2fc_4a6e_b85e_fd7c9dcde65b.slice - libcontainer container kubepods-besteffort-pod837b0db6_d2fc_4a6e_b85e_fd7c9dcde65b.slice. Jul 15 23:55:17.513263 containerd[1563]: time="2025-07-15T23:55:17.513228203Z" level=info msg="RemoveContainer for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" returns successfully" Jul 15 23:55:17.513574 kubelet[2682]: I0715 23:55:17.513545 2682 scope.go:117] "RemoveContainer" containerID="cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f" Jul 15 23:55:17.513828 containerd[1563]: time="2025-07-15T23:55:17.513788917Z" level=error msg="ContainerStatus for \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\": not found" Jul 15 23:55:17.518288 systemd[1]: Removed slice kubepods-burstable-pod6010ff85_230a_4b4f_a347_cfa7fcb042f6.slice - libcontainer container kubepods-burstable-pod6010ff85_230a_4b4f_a347_cfa7fcb042f6.slice. Jul 15 23:55:17.518710 systemd[1]: kubepods-burstable-pod6010ff85_230a_4b4f_a347_cfa7fcb042f6.slice: Consumed 7.660s CPU time, 124M memory peak, 600K read from disk, 13.3M written to disk. 
Jul 15 23:55:17.520529 kubelet[2682]: E0715 23:55:17.520492 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\": not found" containerID="cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f" Jul 15 23:55:17.520617 kubelet[2682]: I0715 23:55:17.520532 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f"} err="failed to get container status \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb17dcf19d23dbc2e078028ba4b0fc45775c03e49c6336a6a154c497e9638c1f\": not found" Jul 15 23:55:17.520617 kubelet[2682]: I0715 23:55:17.520599 2682 scope.go:117] "RemoveContainer" containerID="9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d" Jul 15 23:55:17.522876 containerd[1563]: time="2025-07-15T23:55:17.522620160Z" level=info msg="RemoveContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\"" Jul 15 23:55:17.529989 containerd[1563]: time="2025-07-15T23:55:17.529932416Z" level=info msg="RemoveContainer for \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" returns successfully" Jul 15 23:55:17.530169 kubelet[2682]: I0715 23:55:17.530119 2682 scope.go:117] "RemoveContainer" containerID="ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0" Jul 15 23:55:17.531847 containerd[1563]: time="2025-07-15T23:55:17.531810559Z" level=info msg="RemoveContainer for \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\"" Jul 15 23:55:17.537469 containerd[1563]: time="2025-07-15T23:55:17.537417547Z" level=info msg="RemoveContainer for \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" returns successfully" 
Jul 15 23:55:17.537888 kubelet[2682]: I0715 23:55:17.537860 2682 scope.go:117] "RemoveContainer" containerID="af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536" Jul 15 23:55:17.548733 containerd[1563]: time="2025-07-15T23:55:17.548687186Z" level=info msg="RemoveContainer for \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\"" Jul 15 23:55:17.558226 containerd[1563]: time="2025-07-15T23:55:17.558193430Z" level=info msg="RemoveContainer for \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" returns successfully" Jul 15 23:55:17.558427 kubelet[2682]: I0715 23:55:17.558396 2682 scope.go:117] "RemoveContainer" containerID="e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07" Jul 15 23:55:17.559723 containerd[1563]: time="2025-07-15T23:55:17.559678022Z" level=info msg="RemoveContainer for \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\"" Jul 15 23:55:17.563523 containerd[1563]: time="2025-07-15T23:55:17.563475025Z" level=info msg="RemoveContainer for \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" returns successfully" Jul 15 23:55:17.563712 kubelet[2682]: I0715 23:55:17.563692 2682 scope.go:117] "RemoveContainer" containerID="bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327" Jul 15 23:55:17.565044 containerd[1563]: time="2025-07-15T23:55:17.565009792Z" level=info msg="RemoveContainer for \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\"" Jul 15 23:55:17.569920 containerd[1563]: time="2025-07-15T23:55:17.569886987Z" level=info msg="RemoveContainer for \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" returns successfully" Jul 15 23:55:17.570088 kubelet[2682]: I0715 23:55:17.570058 2682 scope.go:117] "RemoveContainer" containerID="9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d" Jul 15 23:55:17.570295 containerd[1563]: time="2025-07-15T23:55:17.570257353Z" level=error msg="ContainerStatus for 
\"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\": not found" Jul 15 23:55:17.570444 kubelet[2682]: E0715 23:55:17.570416 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\": not found" containerID="9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d" Jul 15 23:55:17.570497 kubelet[2682]: I0715 23:55:17.570456 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d"} err="failed to get container status \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9188b9ebaee1ce19087503f92559f6e114252b98dff8f2951bc0c760067dc79d\": not found" Jul 15 23:55:17.570497 kubelet[2682]: I0715 23:55:17.570484 2682 scope.go:117] "RemoveContainer" containerID="ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0" Jul 15 23:55:17.570700 containerd[1563]: time="2025-07-15T23:55:17.570664429Z" level=error msg="ContainerStatus for \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\": not found" Jul 15 23:55:17.570806 kubelet[2682]: E0715 23:55:17.570782 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\": not found" 
containerID="ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0" Jul 15 23:55:17.570860 kubelet[2682]: I0715 23:55:17.570806 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0"} err="failed to get container status \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca114de7feacd501431c0afd1395452be6caec1bc0e2e23423a5041073ae04d0\": not found" Jul 15 23:55:17.570860 kubelet[2682]: I0715 23:55:17.570825 2682 scope.go:117] "RemoveContainer" containerID="af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536" Jul 15 23:55:17.571021 containerd[1563]: time="2025-07-15T23:55:17.570988298Z" level=error msg="ContainerStatus for \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\": not found" Jul 15 23:55:17.571147 kubelet[2682]: E0715 23:55:17.571122 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\": not found" containerID="af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536" Jul 15 23:55:17.571188 kubelet[2682]: I0715 23:55:17.571149 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536"} err="failed to get container status \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\": rpc error: code = NotFound desc = an error occurred when try to find container \"af44173d5d2439837b711d44cf0115b7630b0318a590d58a368f060bfb624536\": not found" Jul 15 
23:55:17.571188 kubelet[2682]: I0715 23:55:17.571166 2682 scope.go:117] "RemoveContainer" containerID="e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07" Jul 15 23:55:17.571382 containerd[1563]: time="2025-07-15T23:55:17.571346703Z" level=error msg="ContainerStatus for \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\": not found" Jul 15 23:55:17.571521 kubelet[2682]: E0715 23:55:17.571477 2682 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\": not found" containerID="e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07" Jul 15 23:55:17.571574 kubelet[2682]: I0715 23:55:17.571510 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07"} err="failed to get container status \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\": rpc error: code = NotFound desc = an error occurred when try to find container \"e706a8586576c3859096ff43c98fc7dc705450db619678ece4928d5a7a853e07\": not found" Jul 15 23:55:17.571574 kubelet[2682]: I0715 23:55:17.571533 2682 scope.go:117] "RemoveContainer" containerID="bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327" Jul 15 23:55:17.571758 containerd[1563]: time="2025-07-15T23:55:17.571722460Z" level=error msg="ContainerStatus for \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\": not found" Jul 15 23:55:17.571869 kubelet[2682]: E0715 23:55:17.571826 2682 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\": not found" containerID="bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327" Jul 15 23:55:17.571924 kubelet[2682]: I0715 23:55:17.571864 2682 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327"} err="failed to get container status \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcefd6437d59c34169812fbbb69e436982a250c4593bb27944bb24c2cf8d1327\": not found" Jul 15 23:55:17.816089 systemd[1]: var-lib-kubelet-pods-837b0db6\x2dd2fc\x2d4a6e\x2db85e\x2dfd7c9dcde65b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drnwbj.mount: Deactivated successfully. Jul 15 23:55:17.816214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf-shm.mount: Deactivated successfully. Jul 15 23:55:17.816303 systemd[1]: var-lib-kubelet-pods-6010ff85\x2d230a\x2d4b4f\x2da347\x2dcfa7fcb042f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6c98x.mount: Deactivated successfully. Jul 15 23:55:17.816389 systemd[1]: var-lib-kubelet-pods-6010ff85\x2d230a\x2d4b4f\x2da347\x2dcfa7fcb042f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:55:17.816465 systemd[1]: var-lib-kubelet-pods-6010ff85\x2d230a\x2d4b4f\x2da347\x2dcfa7fcb042f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 15 23:55:18.729957 sshd[4307]: Connection closed by 10.0.0.1 port 49738 Jul 15 23:55:18.730450 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Jul 15 23:55:18.743222 systemd[1]: sshd@24-10.0.0.86:22-10.0.0.1:49738.service: Deactivated successfully. Jul 15 23:55:18.745264 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 23:55:18.746179 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit. Jul 15 23:55:18.749623 systemd[1]: Started sshd@25-10.0.0.86:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872). Jul 15 23:55:18.750499 systemd-logind[1498]: Removed session 25. Jul 15 23:55:18.807510 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:55:18.809037 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:55:18.813634 systemd-logind[1498]: New session 26 of user core. Jul 15 23:55:18.824785 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 15 23:55:18.913154 containerd[1563]: time="2025-07-15T23:55:18.912986543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" id:\"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" pid:2976 exit_status:137 exited_at:{seconds:1752623716 nanos:844894289}" Jul 15 23:55:19.144146 kubelet[2682]: I0715 23:55:19.144017 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" path="/var/lib/kubelet/pods/6010ff85-230a-4b4f-a347-cfa7fcb042f6/volumes" Jul 15 23:55:19.145224 kubelet[2682]: I0715 23:55:19.145187 2682 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b" path="/var/lib/kubelet/pods/837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b/volumes" Jul 15 23:55:19.249203 sshd[4464]: Connection closed by 10.0.0.1 port 32872 Jul 15 23:55:19.248935 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Jul 15 23:55:19.263109 systemd[1]: sshd@25-10.0.0.86:22-10.0.0.1:32872.service: Deactivated successfully. 
Jul 15 23:55:19.265515 kubelet[2682]: E0715 23:55:19.265470 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="cilium-agent" Jul 15 23:55:19.265515 kubelet[2682]: E0715 23:55:19.265502 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="mount-bpf-fs" Jul 15 23:55:19.265515 kubelet[2682]: E0715 23:55:19.265509 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="clean-cilium-state" Jul 15 23:55:19.265515 kubelet[2682]: E0715 23:55:19.265517 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b" containerName="cilium-operator" Jul 15 23:55:19.265687 kubelet[2682]: E0715 23:55:19.265527 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="mount-cgroup" Jul 15 23:55:19.265687 kubelet[2682]: E0715 23:55:19.265535 2682 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="apply-sysctl-overwrites" Jul 15 23:55:19.266542 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 23:55:19.270877 kubelet[2682]: I0715 23:55:19.270578 2682 memory_manager.go:354] "RemoveStaleState removing state" podUID="837b0db6-d2fc-4a6e-b85e-fd7c9dcde65b" containerName="cilium-operator" Jul 15 23:55:19.270877 kubelet[2682]: I0715 23:55:19.270611 2682 memory_manager.go:354] "RemoveStaleState removing state" podUID="6010ff85-230a-4b4f-a347-cfa7fcb042f6" containerName="cilium-agent" Jul 15 23:55:19.273316 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit. Jul 15 23:55:19.278247 systemd[1]: Started sshd@26-10.0.0.86:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888). Jul 15 23:55:19.285725 systemd-logind[1498]: Removed session 26. 
Jul 15 23:55:19.299434 systemd[1]: Created slice kubepods-burstable-pod77c3d128_eb52_4306_a157_63c3fb8495c4.slice - libcontainer container kubepods-burstable-pod77c3d128_eb52_4306_a157_63c3fb8495c4.slice. Jul 15 23:55:19.345628 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:55:19.347406 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:55:19.352043 systemd-logind[1498]: New session 27 of user core. Jul 15 23:55:19.362803 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381702 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-cilium-run\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381744 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-cilium-cgroup\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381767 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-host-proc-sys-net\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381784 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gd6f\" (UniqueName: 
\"kubernetes.io/projected/77c3d128-eb52-4306-a157-63c3fb8495c4-kube-api-access-6gd6f\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381801 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-etc-cni-netd\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.381885 kubelet[2682]: I0715 23:55:19.381814 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-lib-modules\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382130 kubelet[2682]: I0715 23:55:19.381829 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-hostproc\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382130 kubelet[2682]: I0715 23:55:19.381842 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-cni-path\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382313 kubelet[2682]: I0715 23:55:19.382245 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77c3d128-eb52-4306-a157-63c3fb8495c4-cilium-config-path\") pod \"cilium-mgqvz\" (UID: 
\"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382313 kubelet[2682]: I0715 23:55:19.382298 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-host-proc-sys-kernel\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382313 kubelet[2682]: I0715 23:55:19.382317 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-xtables-lock\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382430 kubelet[2682]: I0715 23:55:19.382331 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77c3d128-eb52-4306-a157-63c3fb8495c4-cilium-ipsec-secrets\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382430 kubelet[2682]: I0715 23:55:19.382359 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77c3d128-eb52-4306-a157-63c3fb8495c4-bpf-maps\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382430 kubelet[2682]: I0715 23:55:19.382377 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77c3d128-eb52-4306-a157-63c3fb8495c4-clustermesh-secrets\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.382430 kubelet[2682]: 
I0715 23:55:19.382392 2682 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77c3d128-eb52-4306-a157-63c3fb8495c4-hubble-tls\") pod \"cilium-mgqvz\" (UID: \"77c3d128-eb52-4306-a157-63c3fb8495c4\") " pod="kube-system/cilium-mgqvz" Jul 15 23:55:19.414029 sshd[4479]: Connection closed by 10.0.0.1 port 32888 Jul 15 23:55:19.414367 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jul 15 23:55:19.428249 systemd[1]: sshd@26-10.0.0.86:22-10.0.0.1:32888.service: Deactivated successfully. Jul 15 23:55:19.430108 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 23:55:19.430942 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit. Jul 15 23:55:19.433814 systemd[1]: Started sshd@27-10.0.0.86:22-10.0.0.1:32900.service - OpenSSH per-connection server daemon (10.0.0.1:32900). Jul 15 23:55:19.434458 systemd-logind[1498]: Removed session 27. Jul 15 23:55:19.482188 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 32900 ssh2: RSA SHA256:wrO5NCJWuMjqDZoRWCG1KDLSAbftsNF14I2QAREtKoA Jul 15 23:55:19.483960 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:55:19.503443 systemd-logind[1498]: New session 28 of user core. Jul 15 23:55:19.510828 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jul 15 23:55:19.603831 kubelet[2682]: E0715 23:55:19.603774 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:55:19.607229 containerd[1563]: time="2025-07-15T23:55:19.607192208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgqvz,Uid:77c3d128-eb52-4306-a157-63c3fb8495c4,Namespace:kube-system,Attempt:0,}" Jul 15 23:55:19.630604 containerd[1563]: time="2025-07-15T23:55:19.630544374Z" level=info msg="connecting to shim 5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:55:19.659838 systemd[1]: Started cri-containerd-5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5.scope - libcontainer container 5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5. 
Jul 15 23:55:19.692100 containerd[1563]: time="2025-07-15T23:55:19.691446972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgqvz,Uid:77c3d128-eb52-4306-a157-63c3fb8495c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\""
Jul 15 23:55:19.692954 kubelet[2682]: E0715 23:55:19.692925 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:19.698598 containerd[1563]: time="2025-07-15T23:55:19.698554410Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 15 23:55:19.706373 containerd[1563]: time="2025-07-15T23:55:19.706313446Z" level=info msg="Container 4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:19.731150 containerd[1563]: time="2025-07-15T23:55:19.731086374Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\""
Jul 15 23:55:19.731722 containerd[1563]: time="2025-07-15T23:55:19.731692825Z" level=info msg="StartContainer for \"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\""
Jul 15 23:55:19.732783 containerd[1563]: time="2025-07-15T23:55:19.732727311Z" level=info msg="connecting to shim 4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" protocol=ttrpc version=3
Jul 15 23:55:19.764857 systemd[1]: Started cri-containerd-4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa.scope - libcontainer container 4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa.
Jul 15 23:55:19.799464 containerd[1563]: time="2025-07-15T23:55:19.799410993Z" level=info msg="StartContainer for \"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\" returns successfully"
Jul 15 23:55:19.811569 systemd[1]: cri-containerd-4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa.scope: Deactivated successfully.
Jul 15 23:55:19.812567 containerd[1563]: time="2025-07-15T23:55:19.812513149Z" level=info msg="received exit event container_id:\"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\" id:\"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\" pid:4557 exited_at:{seconds:1752623719 nanos:812188178}"
Jul 15 23:55:19.812785 containerd[1563]: time="2025-07-15T23:55:19.812753020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\" id:\"4bb6765d5eb97144848abd1238fa90f5cbeac28ff6302559178cc80b4d223baa\" pid:4557 exited_at:{seconds:1752623719 nanos:812188178}"
Jul 15 23:55:19.816142 kubelet[2682]: I0715 23:55:19.815827 2682 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T23:55:19Z","lastTransitionTime":"2025-07-15T23:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 15 23:55:20.537327 kubelet[2682]: E0715 23:55:20.537250 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:20.540726 containerd[1563]: time="2025-07-15T23:55:20.540018815Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 15 23:55:20.549762 containerd[1563]: time="2025-07-15T23:55:20.549712959Z" level=info msg="Container 503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:20.560395 containerd[1563]: time="2025-07-15T23:55:20.560336110Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\""
Jul 15 23:55:20.560980 containerd[1563]: time="2025-07-15T23:55:20.560935367Z" level=info msg="StartContainer for \"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\""
Jul 15 23:55:20.561984 containerd[1563]: time="2025-07-15T23:55:20.561950366Z" level=info msg="connecting to shim 503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" protocol=ttrpc version=3
Jul 15 23:55:20.590797 systemd[1]: Started cri-containerd-503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69.scope - libcontainer container 503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69.
Jul 15 23:55:20.632494 systemd[1]: cri-containerd-503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69.scope: Deactivated successfully.
Jul 15 23:55:20.633137 containerd[1563]: time="2025-07-15T23:55:20.633102239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\" id:\"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\" pid:4601 exited_at:{seconds:1752623720 nanos:632831840}"
Jul 15 23:55:20.846208 containerd[1563]: time="2025-07-15T23:55:20.846021846Z" level=info msg="received exit event container_id:\"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\" id:\"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\" pid:4601 exited_at:{seconds:1752623720 nanos:632831840}"
Jul 15 23:55:20.848131 containerd[1563]: time="2025-07-15T23:55:20.848103552Z" level=info msg="StartContainer for \"503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69\" returns successfully"
Jul 15 23:55:20.869514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-503d994a38dbce1da1e54791d36d32c0e0640af8167e96f3146f831b7de3be69-rootfs.mount: Deactivated successfully.
Jul 15 23:55:21.540986 kubelet[2682]: E0715 23:55:21.540950 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:21.542745 containerd[1563]: time="2025-07-15T23:55:21.542630463Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 15 23:55:21.989957 containerd[1563]: time="2025-07-15T23:55:21.989882642Z" level=info msg="Container d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:22.202582 kubelet[2682]: E0715 23:55:22.202526 2682 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 23:55:22.303615 containerd[1563]: time="2025-07-15T23:55:22.303292151Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\""
Jul 15 23:55:22.304352 containerd[1563]: time="2025-07-15T23:55:22.304298473Z" level=info msg="StartContainer for \"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\""
Jul 15 23:55:22.306953 containerd[1563]: time="2025-07-15T23:55:22.306856995Z" level=info msg="connecting to shim d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" protocol=ttrpc version=3
Jul 15 23:55:22.339319 systemd[1]: Started cri-containerd-d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757.scope - libcontainer container d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757.
Jul 15 23:55:22.395129 containerd[1563]: time="2025-07-15T23:55:22.395065478Z" level=info msg="StartContainer for \"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\" returns successfully"
Jul 15 23:55:22.396232 systemd[1]: cri-containerd-d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757.scope: Deactivated successfully.
Jul 15 23:55:22.398251 containerd[1563]: time="2025-07-15T23:55:22.398195595Z" level=info msg="received exit event container_id:\"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\" id:\"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\" pid:4646 exited_at:{seconds:1752623722 nanos:397482934}"
Jul 15 23:55:22.398404 containerd[1563]: time="2025-07-15T23:55:22.398358902Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\" id:\"d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757\" pid:4646 exited_at:{seconds:1752623722 nanos:397482934}"
Jul 15 23:55:22.426801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2dc48726f48a1a9f91641677fbb5e12fbcb45f0692c68572e9a19e527fea757-rootfs.mount: Deactivated successfully.
Jul 15 23:55:22.546371 kubelet[2682]: E0715 23:55:22.546311 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:23.550911 kubelet[2682]: E0715 23:55:23.550874 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:23.552632 containerd[1563]: time="2025-07-15T23:55:23.552576326Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 15 23:55:24.050386 containerd[1563]: time="2025-07-15T23:55:24.050328868Z" level=info msg="Container 517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:24.438133 containerd[1563]: time="2025-07-15T23:55:24.437109019Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\""
Jul 15 23:55:24.439687 containerd[1563]: time="2025-07-15T23:55:24.439603280Z" level=info msg="StartContainer for \"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\""
Jul 15 23:55:24.442402 containerd[1563]: time="2025-07-15T23:55:24.442331009Z" level=info msg="connecting to shim 517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" protocol=ttrpc version=3
Jul 15 23:55:24.478819 systemd[1]: Started cri-containerd-517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66.scope - libcontainer container 517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66.
Jul 15 23:55:24.530223 systemd[1]: cri-containerd-517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66.scope: Deactivated successfully.
Jul 15 23:55:24.531427 containerd[1563]: time="2025-07-15T23:55:24.531342960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\" id:\"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\" pid:4686 exited_at:{seconds:1752623724 nanos:530642352}"
Jul 15 23:55:24.706758 containerd[1563]: time="2025-07-15T23:55:24.706323768Z" level=info msg="received exit event container_id:\"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\" id:\"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\" pid:4686 exited_at:{seconds:1752623724 nanos:530642352}"
Jul 15 23:55:24.721128 containerd[1563]: time="2025-07-15T23:55:24.721014958Z" level=info msg="StartContainer for \"517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66\" returns successfully"
Jul 15 23:55:24.747568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-517ff9eb1383741237e8ecd1f1ccd73f72f96197120ac1862ae74fc8143e0e66-rootfs.mount: Deactivated successfully.
Jul 15 23:55:25.740691 kubelet[2682]: E0715 23:55:25.740595 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:25.746381 containerd[1563]: time="2025-07-15T23:55:25.746249644Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:55:25.763422 containerd[1563]: time="2025-07-15T23:55:25.763345976Z" level=info msg="Container 71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:55:25.772449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300538897.mount: Deactivated successfully.
Jul 15 23:55:25.777471 containerd[1563]: time="2025-07-15T23:55:25.777404875Z" level=info msg="CreateContainer within sandbox \"5a797b04e8fa1546fc2800344ba011de63817537d6d76ed33e51f83b276ed7b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\""
Jul 15 23:55:25.781107 containerd[1563]: time="2025-07-15T23:55:25.780991790Z" level=info msg="StartContainer for \"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\""
Jul 15 23:55:25.782966 containerd[1563]: time="2025-07-15T23:55:25.782888136Z" level=info msg="connecting to shim 71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513" address="unix:///run/containerd/s/253377b00eb147d996206d8a2ce96a379c85d79091ea0c567852c40ee14e7c45" protocol=ttrpc version=3
Jul 15 23:55:25.812829 systemd[1]: Started cri-containerd-71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513.scope - libcontainer container 71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513.
Jul 15 23:55:25.932002 containerd[1563]: time="2025-07-15T23:55:25.931924784Z" level=info msg="StartContainer for \"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" returns successfully"
Jul 15 23:55:26.033324 containerd[1563]: time="2025-07-15T23:55:26.032978713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"6ce1237ab02112f8497f2e85ce2e74c99e67af12bfc0a9e771f7a8d4e793051e\" pid:4757 exited_at:{seconds:1752623726 nanos:32210958}"
Jul 15 23:55:26.475685 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 15 23:55:26.749535 kubelet[2682]: E0715 23:55:26.749373 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:26.769385 kubelet[2682]: I0715 23:55:26.769284 2682 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mgqvz" podStartSLOduration=7.769241978 podStartE2EDuration="7.769241978s" podCreationTimestamp="2025-07-15 23:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:55:26.768610009 +0000 UTC m=+109.711146663" watchObservedRunningTime="2025-07-15 23:55:26.769241978 +0000 UTC m=+109.711778621"
Jul 15 23:55:27.609014 containerd[1563]: time="2025-07-15T23:55:27.608951961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"a2807e9810b02cfb8a423f7bd9a89f00a260da3bb37d0644539b31d036f7ec14\" pid:4829 exit_status:1 exited_at:{seconds:1752623727 nanos:608530037}"
Jul 15 23:55:27.751416 kubelet[2682]: E0715 23:55:27.751380 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:28.753917 kubelet[2682]: E0715 23:55:28.753845 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:29.732004 containerd[1563]: time="2025-07-15T23:55:29.731926177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"8b06865f896bab3a1a702edfa340377ad23e15dc3f8148c41739969b983e489d\" pid:4961 exit_status:1 exited_at:{seconds:1752623729 nanos:731241880}"
Jul 15 23:55:30.911060 systemd-networkd[1484]: lxc_health: Link UP
Jul 15 23:55:30.911593 systemd-networkd[1484]: lxc_health: Gained carrier
Jul 15 23:55:31.606225 kubelet[2682]: E0715 23:55:31.605739 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:31.763302 kubelet[2682]: E0715 23:55:31.763258 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:31.996906 containerd[1563]: time="2025-07-15T23:55:31.996863348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"c05d82a902def43211b03b329b66975fe30c7b15b2e6976b4f37b7d0f58c75c3\" pid:5308 exited_at:{seconds:1752623731 nanos:996432137}"
Jul 15 23:55:32.496996 systemd-networkd[1484]: lxc_health: Gained IPv6LL
Jul 15 23:55:32.766075 kubelet[2682]: E0715 23:55:32.765927 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:34.135447 containerd[1563]: time="2025-07-15T23:55:34.135398106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"d00eb7ae856ad90f0d412331e0e197bba8520ec5375ea10b2f96fe1a8f6f4aae\" pid:5337 exited_at:{seconds:1752623734 nanos:135023692}"
Jul 15 23:55:36.465320 containerd[1563]: time="2025-07-15T23:55:36.465252447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71f74691af81eee26304bdc9ec5ed11b10fa7e7bff6aeaf8c8f70c7610943513\" id:\"4a249d28d1fdf2eeca76d53fc6d967827b6598cb926fa56c10299e3c6c56d490\" pid:5372 exited_at:{seconds:1752623736 nanos:464794095}"
Jul 15 23:55:36.472238 sshd[4492]: Connection closed by 10.0.0.1 port 32900
Jul 15 23:55:36.473070 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Jul 15 23:55:36.478586 systemd[1]: sshd@27-10.0.0.86:22-10.0.0.1:32900.service: Deactivated successfully.
Jul 15 23:55:36.481279 systemd[1]: session-28.scope: Deactivated successfully.
Jul 15 23:55:36.482255 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit.
Jul 15 23:55:36.484009 systemd-logind[1498]: Removed session 28.
Jul 15 23:55:37.138216 containerd[1563]: time="2025-07-15T23:55:37.137912022Z" level=info msg="StopPodSandbox for \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\""
Jul 15 23:55:37.138216 containerd[1563]: time="2025-07-15T23:55:37.138131646Z" level=info msg="TearDown network for sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" successfully"
Jul 15 23:55:37.138216 containerd[1563]: time="2025-07-15T23:55:37.138148237Z" level=info msg="StopPodSandbox for \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" returns successfully"
Jul 15 23:55:37.138826 containerd[1563]: time="2025-07-15T23:55:37.138776087Z" level=info msg="RemovePodSandbox for \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\""
Jul 15 23:55:37.138899 containerd[1563]: time="2025-07-15T23:55:37.138844175Z" level=info msg="Forcibly stopping sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\""
Jul 15 23:55:37.139046 containerd[1563]: time="2025-07-15T23:55:37.139013533Z" level=info msg="TearDown network for sandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" successfully"
Jul 15 23:55:37.141044 containerd[1563]: time="2025-07-15T23:55:37.141012341Z" level=info msg="Ensure that sandbox 8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082 in task-service has been cleanup successfully"
Jul 15 23:55:37.146798 containerd[1563]: time="2025-07-15T23:55:37.146761179Z" level=info msg="RemovePodSandbox \"8507ade49ad9747708cb8ecaca2f3c3dbabfee5106aa722eb891b9d34c7b6082\" returns successfully"
Jul 15 23:55:37.146931 kubelet[2682]: E0715 23:55:37.146896 2682 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:55:37.147354 containerd[1563]: time="2025-07-15T23:55:37.147310792Z" level=info msg="StopPodSandbox for \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\""
Jul 15 23:55:37.147458 containerd[1563]: time="2025-07-15T23:55:37.147414818Z" level=info msg="TearDown network for sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" successfully"
Jul 15 23:55:37.147458 containerd[1563]: time="2025-07-15T23:55:37.147437330Z" level=info msg="StopPodSandbox for \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" returns successfully"
Jul 15 23:55:37.147822 containerd[1563]: time="2025-07-15T23:55:37.147792678Z" level=info msg="RemovePodSandbox for \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\""
Jul 15 23:55:37.147948 containerd[1563]: time="2025-07-15T23:55:37.147919837Z" level=info msg="Forcibly stopping sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\""
Jul 15 23:55:37.148019 containerd[1563]: time="2025-07-15T23:55:37.148002282Z" level=info msg="TearDown network for sandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" successfully"
Jul 15 23:55:37.150311 containerd[1563]: time="2025-07-15T23:55:37.150282489Z" level=info msg="Ensure that sandbox a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf in task-service has been cleanup successfully"
Jul 15 23:55:37.154563 containerd[1563]: time="2025-07-15T23:55:37.154458240Z" level=info msg="RemovePodSandbox \"a03c3a23b6b5323d2f6209ac379fadd9f30966c2c643d299fec2af1b0b6048bf\" returns successfully"