Jul 15 05:20:02.816283 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 03:28:48 -00 2025 Jul 15 05:20:02.816314 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:20:02.816352 kernel: BIOS-provided physical RAM map: Jul 15 05:20:02.816364 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 15 05:20:02.816373 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 15 05:20:02.816381 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Jul 15 05:20:02.816391 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 15 05:20:02.816401 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Jul 15 05:20:02.816409 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 15 05:20:02.816419 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 15 05:20:02.816427 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 15 05:20:02.816439 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 15 05:20:02.816448 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 15 05:20:02.816457 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 15 05:20:02.816468 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 15 05:20:02.816477 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 15 05:20:02.816489 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 15 05:20:02.816499 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 15 05:20:02.816508 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 15 05:20:02.816517 kernel: NX (Execute Disable) protection: active Jul 15 05:20:02.816526 kernel: APIC: Static calls initialized Jul 15 05:20:02.816535 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable Jul 15 05:20:02.816545 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable Jul 15 05:20:02.816554 kernel: extended physical RAM map: Jul 15 05:20:02.816563 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 15 05:20:02.816583 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 15 05:20:02.816601 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Jul 15 05:20:02.816614 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 15 05:20:02.816623 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable Jul 15 05:20:02.816632 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable Jul 15 05:20:02.816641 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable Jul 15 05:20:02.816651 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable Jul 15 05:20:02.816660 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable Jul 15 05:20:02.816670 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Jul 15 05:20:02.816679 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 15 05:20:02.816688 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 15 05:20:02.816698 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 15 05:20:02.816707 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 15 05:20:02.816719 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 15 05:20:02.816729 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 15 05:20:02.816743 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 15 05:20:02.816753 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 15 05:20:02.816763 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 15 05:20:02.816773 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 15 05:20:02.816785 kernel: efi: EFI v2.7 by EDK II Jul 15 05:20:02.816795 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Jul 15 05:20:02.816805 kernel: random: crng init done Jul 15 05:20:02.816814 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 15 05:20:02.816824 kernel: secureboot: Secure boot enabled Jul 15 05:20:02.816834 kernel: SMBIOS 2.8 present. Jul 15 05:20:02.816843 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 15 05:20:02.816853 kernel: DMI: Memory slots populated: 1/1 Jul 15 05:20:02.816863 kernel: Hypervisor detected: KVM Jul 15 05:20:02.816872 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 15 05:20:02.816882 kernel: kvm-clock: using sched offset of 5024184253 cycles Jul 15 05:20:02.816896 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 15 05:20:02.816906 kernel: tsc: Detected 2794.750 MHz processor Jul 15 05:20:02.816917 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 15 05:20:02.816927 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 15 05:20:02.816937 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Jul 15 05:20:02.816947 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 15 05:20:02.816958 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 15 05:20:02.816970 kernel: Using GB pages for direct mapping Jul 15 05:20:02.816981 kernel: ACPI: Early table checksum verification disabled Jul 15 05:20:02.816994 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Jul 15 05:20:02.817004 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 15 05:20:02.817014 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:20:02.817024 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:20:02.817034 kernel: ACPI: FACS 0x000000009BBDD000 000040 Jul 15 05:20:02.817044 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:20:02.817074 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:20:02.817094 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 05:20:02.817104 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Jul 15 05:20:02.817118 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 15 05:20:02.817128 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Jul 15 05:20:02.817138 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Jul 15 05:20:02.817148 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Jul 15 05:20:02.817157 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Jul 15 05:20:02.817167 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Jul 15 05:20:02.817177 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Jul 15 05:20:02.817187 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Jul 15 05:20:02.817196 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Jul 15 05:20:02.817208 kernel: No NUMA configuration found Jul 15 05:20:02.817218 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Jul 15 05:20:02.817229 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Jul 15 05:20:02.817239 kernel: Zone ranges: Jul 15 05:20:02.817249 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 15 05:20:02.817258 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Jul 15 05:20:02.817268 kernel: Normal empty Jul 15 05:20:02.817278 kernel: Device empty Jul 15 05:20:02.817288 kernel: Movable zone start for each node Jul 15 05:20:02.817300 kernel: Early memory node ranges Jul 15 05:20:02.817310 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Jul 15 05:20:02.817320 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Jul 15 05:20:02.817330 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Jul 15 05:20:02.817340 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Jul 15 05:20:02.817350 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Jul 15 05:20:02.817360 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Jul 15 05:20:02.817369 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 15 05:20:02.817379 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Jul 15 05:20:02.817389 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 15 05:20:02.817402 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 15 05:20:02.817412 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 15 05:20:02.817421 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Jul 15 05:20:02.817431 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 15 05:20:02.817441 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 15 05:20:02.817451 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 15 05:20:02.817461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 15 05:20:02.817471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 15 05:20:02.817481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 15 05:20:02.817495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 15 05:20:02.817505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 15 05:20:02.817515 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 15 05:20:02.817525 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 15 05:20:02.817535 kernel: TSC deadline timer available Jul 15 05:20:02.817544 kernel: CPU topo: Max. 
logical packages: 1 Jul 15 05:20:02.817554 kernel: CPU topo: Max. logical dies: 1 Jul 15 05:20:02.817565 kernel: CPU topo: Max. dies per package: 1 Jul 15 05:20:02.817585 kernel: CPU topo: Max. threads per core: 1 Jul 15 05:20:02.817595 kernel: CPU topo: Num. cores per package: 4 Jul 15 05:20:02.817606 kernel: CPU topo: Num. threads per package: 4 Jul 15 05:20:02.817616 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 15 05:20:02.817629 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 15 05:20:02.817639 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 15 05:20:02.817649 kernel: kvm-guest: setup PV sched yield Jul 15 05:20:02.817660 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 15 05:20:02.817670 kernel: Booting paravirtualized kernel on KVM Jul 15 05:20:02.817683 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 15 05:20:02.817694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 15 05:20:02.817705 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 15 05:20:02.817715 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 15 05:20:02.817736 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 15 05:20:02.817748 kernel: kvm-guest: PV spinlocks enabled Jul 15 05:20:02.817773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 15 05:20:02.817787 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:20:02.817801 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 05:20:02.817812 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 05:20:02.817823 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 05:20:02.817833 kernel: Fallback order for Node 0: 0 Jul 15 05:20:02.817848 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Jul 15 05:20:02.817858 kernel: Policy zone: DMA32 Jul 15 05:20:02.817869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 05:20:02.817879 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 05:20:02.817890 kernel: ftrace: allocating 40097 entries in 157 pages Jul 15 05:20:02.817903 kernel: ftrace: allocated 157 pages with 5 groups Jul 15 05:20:02.817913 kernel: Dynamic Preempt: voluntary Jul 15 05:20:02.817924 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 05:20:02.817936 kernel: rcu: RCU event tracing is enabled. Jul 15 05:20:02.817946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 15 05:20:02.817957 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 05:20:02.817967 kernel: Rude variant of Tasks RCU enabled. Jul 15 05:20:02.817977 kernel: Tracing variant of Tasks RCU enabled. Jul 15 05:20:02.817988 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 05:20:02.817999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 05:20:02.818012 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 15 05:20:02.818023 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:20:02.818034 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 05:20:02.818045 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 15 05:20:02.818092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 05:20:02.818104 kernel: Console: colour dummy device 80x25 Jul 15 05:20:02.818117 kernel: printk: legacy console [ttyS0] enabled Jul 15 05:20:02.818128 kernel: ACPI: Core revision 20240827 Jul 15 05:20:02.818142 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 15 05:20:02.818153 kernel: APIC: Switch to symmetric I/O mode setup Jul 15 05:20:02.818163 kernel: x2apic enabled Jul 15 05:20:02.818174 kernel: APIC: Switched APIC routing to: physical x2apic Jul 15 05:20:02.818184 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 15 05:20:02.818195 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 15 05:20:02.818206 kernel: kvm-guest: setup PV IPIs Jul 15 05:20:02.818216 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 15 05:20:02.818227 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:20:02.818241 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 15 05:20:02.818252 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 15 05:20:02.818262 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 15 05:20:02.818273 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 15 05:20:02.818283 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 15 05:20:02.818294 kernel: Spectre V2 : Mitigation: Retpolines Jul 15 05:20:02.818305 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 15 05:20:02.818315 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 15 05:20:02.818326 kernel: RETBleed: Mitigation: untrained return thunk Jul 15 05:20:02.818339 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 15 05:20:02.818350 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 15 05:20:02.818361 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 15 05:20:02.818372 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 15 05:20:02.818383 kernel: x86/bugs: return thunk changed Jul 15 05:20:02.818393 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 15 05:20:02.818404 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 15 05:20:02.818415 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 15 05:20:02.818429 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 15 05:20:02.818440 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 15 05:20:02.818451 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 15 05:20:02.818462 kernel: Freeing SMP alternatives memory: 32K Jul 15 05:20:02.818472 kernel: pid_max: default: 32768 minimum: 301 Jul 15 05:20:02.818483 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 05:20:02.818494 kernel: landlock: Up and running. Jul 15 05:20:02.818504 kernel: SELinux: Initializing. Jul 15 05:20:02.818515 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:20:02.818528 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 05:20:02.818539 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 15 05:20:02.818550 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 15 05:20:02.818560 kernel: ... version: 0 Jul 15 05:20:02.818571 kernel: ... bit width: 48 Jul 15 05:20:02.818582 kernel: ... generic registers: 6 Jul 15 05:20:02.818592 kernel: ... value mask: 0000ffffffffffff Jul 15 05:20:02.818603 kernel: ... max period: 00007fffffffffff Jul 15 05:20:02.818614 kernel: ... fixed-purpose events: 0 Jul 15 05:20:02.818624 kernel: ... event mask: 000000000000003f Jul 15 05:20:02.818638 kernel: signal: max sigframe size: 1776 Jul 15 05:20:02.818660 kernel: rcu: Hierarchical SRCU implementation. Jul 15 05:20:02.818680 kernel: rcu: Max phase no-delay instances is 400. Jul 15 05:20:02.818691 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 05:20:02.818702 kernel: smp: Bringing up secondary CPUs ... Jul 15 05:20:02.818712 kernel: smpboot: x86: Booting SMP configuration: Jul 15 05:20:02.818727 kernel: .... node #0, CPUs: #1 #2 #3 Jul 15 05:20:02.818737 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 05:20:02.818747 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 15 05:20:02.818762 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 137064K reserved, 0K cma-reserved) Jul 15 05:20:02.818773 kernel: devtmpfs: initialized Jul 15 05:20:02.818783 kernel: x86/mm: Memory block size: 128MB Jul 15 05:20:02.818794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Jul 15 05:20:02.818805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Jul 15 05:20:02.818816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 05:20:02.818827 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 05:20:02.818837 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 05:20:02.818850 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 05:20:02.818861 kernel: audit: initializing netlink subsys (disabled) Jul 15 05:20:02.818872 kernel: audit: type=2000 audit(1752556800.277:1): state=initialized audit_enabled=0 res=1 Jul 15 05:20:02.818883 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 05:20:02.818894 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 15 05:20:02.818904 kernel: cpuidle: using governor menu Jul 15 05:20:02.818915 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 05:20:02.818925 kernel: dca service started, version 1.12.1 Jul 15 05:20:02.818936 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 15 05:20:02.818949 kernel: PCI: Using configuration type 1 for base access Jul 15 05:20:02.818960 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Jul 15 05:20:02.818971 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 05:20:02.818982 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 05:20:02.818992 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 05:20:02.819005 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 05:20:02.819017 kernel: ACPI: Added _OSI(Module Device) Jul 15 05:20:02.819029 kernel: ACPI: Added _OSI(Processor Device) Jul 15 05:20:02.819040 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 05:20:02.819067 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 05:20:02.819086 kernel: ACPI: Interpreter enabled Jul 15 05:20:02.819097 kernel: ACPI: PM: (supports S0 S5) Jul 15 05:20:02.819108 kernel: ACPI: Using IOAPIC for interrupt routing Jul 15 05:20:02.819118 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 15 05:20:02.819129 kernel: PCI: Using E820 reservations for host bridge windows Jul 15 05:20:02.819140 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 15 05:20:02.819151 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 05:20:02.819358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 05:20:02.819509 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 15 05:20:02.819647 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 15 05:20:02.819660 kernel: PCI host bridge to bus 0000:00 Jul 15 05:20:02.819799 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 15 05:20:02.819927 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 15 05:20:02.820071 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 15 05:20:02.820224 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 15 05:20:02.820353 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 15 05:20:02.820480 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 15 05:20:02.820611 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 05:20:02.820780 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 15 05:20:02.820931 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 15 05:20:02.821098 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 15 05:20:02.821245 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 15 05:20:02.821383 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 15 05:20:02.821520 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 15 05:20:02.821671 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 05:20:02.821816 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 15 05:20:02.821959 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 15 05:20:02.822134 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 15 05:20:02.822306 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 15 05:20:02.822448 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 15 05:20:02.822565 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 15 05:20:02.822679 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 15 05:20:02.822802 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 15 05:20:02.822917 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 15 05:20:02.823035 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 15 05:20:02.823210 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 15 05:20:02.823364 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 15 05:20:02.823520 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 15 05:20:02.823677 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 15 05:20:02.823843 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 15 05:20:02.823986 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 15 05:20:02.824172 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 15 05:20:02.824316 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 15 05:20:02.824456 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 15 05:20:02.824472 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 15 05:20:02.824483 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 15 05:20:02.824493 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 15 05:20:02.824504 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 15 05:20:02.824515 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 15 05:20:02.824530 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 15 05:20:02.824540 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 15 05:20:02.824551 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 15 05:20:02.824562 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 15 05:20:02.824573 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 15 05:20:02.824583 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 15 05:20:02.824594 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 15 05:20:02.824604 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 15 05:20:02.824615 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 15 05:20:02.824628 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 15 05:20:02.824639 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 15 05:20:02.824649 kernel: iommu: Default domain type: Translated Jul 15 05:20:02.824660 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 15 05:20:02.824671 kernel: efivars: Registered efivars operations Jul 15 05:20:02.824681 kernel: PCI: Using ACPI for IRQ routing Jul 15 05:20:02.824691 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 15 05:20:02.824702 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Jul 15 05:20:02.824712 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Jul 15 05:20:02.824725 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Jul 15 05:20:02.824736 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Jul 15 05:20:02.824746 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Jul 15 05:20:02.824887 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 15 05:20:02.825025 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 15 05:20:02.825205 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Jul 15 05:20:02.825221 kernel: vgaarb: loaded Jul 15 05:20:02.825231 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 15 05:20:02.825246 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 15 05:20:02.825256 kernel: clocksource: Switched to clocksource kvm-clock Jul 15 05:20:02.825267 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 05:20:02.825277 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 05:20:02.825288 kernel: pnp: PnP ACPI init Jul 15 05:20:02.825434 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 15 05:20:02.825449 kernel: pnp: PnP ACPI: found 6 devices Jul 15 05:20:02.825460 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 15 05:20:02.825474 kernel: NET: Registered PF_INET protocol family Jul 15 05:20:02.825484 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 05:20:02.825495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 05:20:02.825506 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 05:20:02.825516 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 05:20:02.825527 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 05:20:02.825537 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 05:20:02.825548 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:20:02.825558 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 05:20:02.825571 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 05:20:02.825582 kernel: NET: Registered PF_XDP protocol family Jul 15 05:20:02.825725 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 15 05:20:02.826973 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 15 05:20:02.827151 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 15 05:20:02.827273 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 15 05:20:02.827378 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 15 05:20:02.827484 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 15 05:20:02.827598 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 15 05:20:02.827704 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 15 05:20:02.827714 kernel: PCI: CLS 0 bytes, default 64 Jul 15 05:20:02.827723 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 15 05:20:02.827731 kernel: Initialise system trusted keyrings Jul 15 05:20:02.827739 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 05:20:02.827748 kernel: Key type asymmetric registered Jul 15 05:20:02.827756 kernel: Asymmetric key parser 'x509' registered Jul 15 05:20:02.827764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 15 05:20:02.827787 kernel: io scheduler mq-deadline registered Jul 15 05:20:02.827798 kernel: io scheduler kyber registered Jul 15 05:20:02.827806 kernel: io scheduler bfq registered Jul 15 05:20:02.827815 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 15 05:20:02.827824 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 15 05:20:02.827832 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 15 05:20:02.827840 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 15 05:20:02.827849 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 05:20:02.827858 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 15 05:20:02.827868 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 15 05:20:02.827877 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 15 05:20:02.827885 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 15 05:20:02.828007 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 15 05:20:02.828171 kernel: rtc_cmos 00:04: registered as rtc0 Jul 15 05:20:02.828308 kernel: rtc_cmos 00:04: setting system clock to 2025-07-15T05:20:02 UTC (1752556802) Jul 15 05:20:02.828444 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 15 05:20:02.828464 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 15 05:20:02.828480 kernel: efifb: probing for efifb Jul 15 05:20:02.828492 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 15 05:20:02.828504 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 15 05:20:02.828516 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 15 05:20:02.828527 kernel: efifb: scrolling: redraw Jul 15 05:20:02.828538 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 15 05:20:02.828550 kernel: Console: switching to colour frame buffer device 160x50 Jul 15 05:20:02.828561 kernel: fb0: EFI VGA frame buffer device Jul 15 05:20:02.828573 kernel: pstore: Using crash dump compression: deflate Jul 15 05:20:02.828587 kernel: pstore: Registered efi_pstore as persistent store backend Jul 15 05:20:02.828600 kernel: NET: Registered PF_INET6 protocol family Jul 15 05:20:02.828612 kernel: Segment Routing with IPv6 Jul 15 05:20:02.828623 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 05:20:02.828635 kernel: NET: Registered PF_PACKET protocol family Jul 15 05:20:02.828649 kernel: Key type dns_resolver registered Jul 15 05:20:02.828660 kernel: IPI shorthand broadcast: enabled Jul 15 05:20:02.828672 kernel: sched_clock: Marking stable (2918003091, 156117883)->(3142144067, -68023093) Jul 15 05:20:02.828683 kernel: registered taskstats version 1 Jul 15 05:20:02.828694 kernel: Loading compiled-in X.509 certificates Jul 15 05:20:02.828705 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: a24478b628e55368911ce1800a2bd6bc158938c7' Jul 15 05:20:02.828716 kernel: Demotion targets for Node 0: null Jul 15 05:20:02.828727 kernel: Key type .fscrypt registered Jul 15 05:20:02.828737 kernel: Key type fscrypt-provisioning registered Jul 15 05:20:02.828751 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 05:20:02.828763 kernel: ima: Allocated hash algorithm: sha1 Jul 15 05:20:02.828773 kernel: ima: No architecture policies found Jul 15 05:20:02.828784 kernel: clk: Disabling unused clocks Jul 15 05:20:02.828795 kernel: Warning: unable to open an initial console. 
Jul 15 05:20:02.828806 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 15 05:20:02.828817 kernel: Write protecting the kernel read-only data: 24576k Jul 15 05:20:02.828828 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 15 05:20:02.828839 kernel: Run /init as init process Jul 15 05:20:02.828853 kernel: with arguments: Jul 15 05:20:02.828863 kernel: /init Jul 15 05:20:02.828874 kernel: with environment: Jul 15 05:20:02.828885 kernel: HOME=/ Jul 15 05:20:02.828895 kernel: TERM=linux Jul 15 05:20:02.828906 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 05:20:02.828918 systemd[1]: Successfully made /usr/ read-only. Jul 15 05:20:02.828933 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:20:02.828949 systemd[1]: Detected virtualization kvm. Jul 15 05:20:02.828960 systemd[1]: Detected architecture x86-64. Jul 15 05:20:02.828971 systemd[1]: Running in initrd. Jul 15 05:20:02.828982 systemd[1]: No hostname configured, using default hostname. Jul 15 05:20:02.828994 systemd[1]: Hostname set to . Jul 15 05:20:02.829006 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:20:02.829020 systemd[1]: Queued start job for default target initrd.target. Jul 15 05:20:02.829033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:20:02.829049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:20:02.829088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 05:20:02.829100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:20:02.829135 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 05:20:02.829150 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 05:20:02.829163 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 05:20:02.829180 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 05:20:02.829191 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:20:02.829203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:20:02.829215 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:20:02.829226 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:20:02.829238 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:20:02.829249 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:20:02.829261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:20:02.829273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:20:02.829289 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 05:20:02.829301 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 05:20:02.829313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 15 05:20:02.829324 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:20:02.829336 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:20:02.829348 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:20:02.829360 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 05:20:02.829372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:20:02.829388 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 15 05:20:02.829400 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 05:20:02.829412 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 05:20:02.829425 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:20:02.829437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:20:02.829449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:20:02.829461 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 05:20:02.829477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:20:02.829489 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 05:20:02.829501 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 05:20:02.829539 systemd-journald[219]: Collecting audit messages is disabled. Jul 15 05:20:02.829572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:20:02.829585 systemd-journald[219]: Journal started Jul 15 05:20:02.829611 systemd-journald[219]: Runtime Journal (/run/log/journal/bed08e1c6a24490f880a02e75bde60f3) is 6M, max 48.2M, 42.2M free. Jul 15 05:20:02.816631 systemd-modules-load[221]: Inserted module 'overlay' Jul 15 05:20:02.832881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 05:20:02.834614 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:20:02.834817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 05:20:02.844084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 05:20:02.846339 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 15 05:20:02.847466 kernel: Bridge firewalling registered Jul 15 05:20:02.849652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:20:02.852572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:20:02.855402 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:20:02.860189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:20:02.863142 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 05:20:02.863318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:20:02.868378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 15 05:20:02.871160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:20:02.874386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 15 05:20:02.876420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:20:02.879890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:20:02.891475 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=926b029026d98240a9e8b6527b65fc026ae523bea87c3b77ffd7237bcc7be4fb Jul 15 05:20:02.929547 systemd-resolved[263]: Positive Trust Anchors: Jul 15 05:20:02.929562 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:20:02.929598 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:20:02.932400 systemd-resolved[263]: Defaulting to hostname 'linux'. Jul 15 05:20:02.933392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 05:20:02.938320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:20:03.000087 kernel: SCSI subsystem initialized Jul 15 05:20:03.009084 kernel: Loading iSCSI transport class v2.0-870. Jul 15 05:20:03.020094 kernel: iscsi: registered transport (tcp) Jul 15 05:20:03.040383 kernel: iscsi: registered transport (qla4xxx) Jul 15 05:20:03.040428 kernel: QLogic iSCSI HBA Driver Jul 15 05:20:03.059917 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:20:03.075741 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:20:03.078430 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:20:03.129361 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 05:20:03.132250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 05:20:03.192087 kernel: raid6: avx2x4 gen() 30320 MB/s Jul 15 05:20:03.209089 kernel: raid6: avx2x2 gen() 29945 MB/s Jul 15 05:20:03.226140 kernel: raid6: avx2x1 gen() 25609 MB/s Jul 15 05:20:03.226156 kernel: raid6: using algorithm avx2x4 gen() 30320 MB/s Jul 15 05:20:03.244184 kernel: raid6: .... xor() 8447 MB/s, rmw enabled Jul 15 05:20:03.244240 kernel: raid6: using avx2x2 recovery algorithm Jul 15 05:20:03.264091 kernel: xor: automatically using best checksumming function avx Jul 15 05:20:03.430094 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 05:20:03.438164 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:20:03.440776 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 15 05:20:03.469159 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 15 05:20:03.474528 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:20:03.477874 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 05:20:03.507356 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Jul 15 05:20:03.536928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:20:03.539570 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:20:03.614337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:20:03.618170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 05:20:03.654097 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 15 05:20:03.661197 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 05:20:03.666165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 05:20:03.666206 kernel: GPT:9289727 != 19775487 Jul 15 05:20:03.666217 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 05:20:03.667292 kernel: GPT:9289727 != 19775487 Jul 15 05:20:03.667312 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 05:20:03.668379 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:20:03.679111 kernel: cryptd: max_cpu_qlen set to 1000 Jul 15 05:20:03.682081 kernel: libata version 3.00 loaded. Jul 15 05:20:03.687080 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 15 05:20:03.688379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:20:03.688504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:20:03.694031 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:20:03.697969 kernel: AES CTR mode by8 optimization enabled Jul 15 05:20:03.699283 kernel: ahci 0000:00:1f.2: version 3.0 Jul 15 05:20:03.699479 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 15 05:20:03.700345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 05:20:03.715561 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 15 05:20:03.716675 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 15 05:20:03.716816 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 15 05:20:03.716950 kernel: scsi host0: ahci Jul 15 05:20:03.717186 kernel: scsi host1: ahci Jul 15 05:20:03.717328 kernel: scsi host2: ahci Jul 15 05:20:03.717464 kernel: scsi host3: ahci Jul 15 05:20:03.731219 kernel: scsi host4: ahci Jul 15 05:20:03.741615 kernel: scsi host5: ahci Jul 15 05:20:03.741830 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 15 05:20:03.741848 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 15 05:20:03.741858 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 15 05:20:03.741868 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 15 05:20:03.741878 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 15 05:20:03.743101 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 15 05:20:03.757716 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 05:20:03.766011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 05:20:03.775499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:20:03.782375 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 05:20:03.782627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 05:20:03.783767 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 05:20:03.788065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:20:03.788116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:20:03.792003 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:20:03.797625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:20:03.800852 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:20:03.808277 disk-uuid[634]: Primary Header is updated. Jul 15 05:20:03.808277 disk-uuid[634]: Secondary Entries is updated. Jul 15 05:20:03.808277 disk-uuid[634]: Secondary Header is updated. Jul 15 05:20:03.812127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:20:03.816086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:20:03.821181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 15 05:20:04.054394 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 15 05:20:04.054487 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 15 05:20:04.054504 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 15 05:20:04.056092 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 15 05:20:04.056167 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 15 05:20:04.057089 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 15 05:20:04.058100 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 15 05:20:04.059283 kernel: ata3.00: applying bridge limits Jul 15 05:20:04.059309 kernel: ata3.00: configured for UDMA/100 Jul 15 05:20:04.060097 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 15 05:20:04.101092 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 15 05:20:04.101316 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 15 05:20:04.115082 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 15 05:20:04.526912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 05:20:04.527904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:20:04.529423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:20:04.529747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:20:04.530914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 05:20:04.567835 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:20:04.820938 disk-uuid[636]: The operation has completed successfully. Jul 15 05:20:04.822549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 05:20:04.857533 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 05:20:04.857662 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 05:20:04.895707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 05:20:04.923333 sh[668]: Success Jul 15 05:20:04.941874 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 05:20:04.941948 kernel: device-mapper: uevent: version 1.0.3 Jul 15 05:20:04.941965 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 05:20:04.952075 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 15 05:20:04.987269 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 05:20:04.991859 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 05:20:05.004778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 05:20:05.010188 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 05:20:05.010240 kernel: BTRFS: device fsid eb96c768-dac4-4ca9-ae1d-82815d4ce00b devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (680) Jul 15 05:20:05.011456 kernel: BTRFS info (device dm-0): first mount of filesystem eb96c768-dac4-4ca9-ae1d-82815d4ce00b Jul 15 05:20:05.011478 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:20:05.013077 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 05:20:05.017634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 05:20:05.019804 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Jul 15 05:20:05.022037 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 05:20:05.024796 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 05:20:05.027710 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 05:20:05.051708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Jul 15 05:20:05.051741 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:20:05.051753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:20:05.053199 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:20:05.060075 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:20:05.060807 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 05:20:05.064702 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 05:20:05.141607 ignition[757]: Ignition 2.21.0 Jul 15 05:20:05.141622 ignition[757]: Stage: fetch-offline Jul 15 05:20:05.141656 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:05.141666 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:05.141756 ignition[757]: parsed url from cmdline: "" Jul 15 05:20:05.141761 ignition[757]: no config URL provided Jul 15 05:20:05.141767 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 05:20:05.141778 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jul 15 05:20:05.141802 ignition[757]: op(1): [started] loading QEMU firmware config module Jul 15 05:20:05.141808 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 05:20:05.150863 ignition[757]: op(1): [finished] loading QEMU firmware config module Jul 15 05:20:05.168212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:20:05.172195 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:20:05.196044 ignition[757]: parsing config with SHA512: d6abb23ced4a95c9d208a70f476d60ce6a8de5891d7aab933221d230efb4e03dcaf49d7218a5041f5f98f53dc04fb5173c036c53471e6a03f10782e6b3f72e75 Jul 15 05:20:05.201167 unknown[757]: fetched base config from "system" Jul 15 05:20:05.201586 ignition[757]: fetch-offline: fetch-offline passed Jul 15 05:20:05.201175 unknown[757]: fetched user config from "qemu" Jul 15 05:20:05.201642 ignition[757]: Ignition finished successfully Jul 15 05:20:05.204613 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:20:05.232940 systemd-networkd[857]: lo: Link UP Jul 15 05:20:05.232952 systemd-networkd[857]: lo: Gained carrier Jul 15 05:20:05.234442 systemd-networkd[857]: Enumeration completed Jul 15 05:20:05.234742 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:20:05.234833 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:20:05.234838 systemd-networkd[857]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 05:20:05.235329 systemd-networkd[857]: eth0: Link UP Jul 15 05:20:05.235333 systemd-networkd[857]: eth0: Gained carrier Jul 15 05:20:05.235342 systemd-networkd[857]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:20:05.240364 systemd[1]: Reached target network.target - Network. Jul 15 05:20:05.242946 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 05:20:05.249231 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 05:20:05.253134 systemd-networkd[857]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:20:05.282776 ignition[861]: Ignition 2.21.0 Jul 15 05:20:05.282791 ignition[861]: Stage: kargs Jul 15 05:20:05.282910 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:05.282921 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:05.285165 ignition[861]: kargs: kargs passed Jul 15 05:20:05.285232 ignition[861]: Ignition finished successfully Jul 15 05:20:05.290726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 05:20:05.292345 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 05:20:05.294445 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.126 Jul 15 05:20:05.294455 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jul 15 05:20:05.328091 ignition[870]: Ignition 2.21.0 Jul 15 05:20:05.328104 ignition[870]: Stage: disks Jul 15 05:20:05.328232 ignition[870]: no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:05.328243 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:05.330592 ignition[870]: disks: disks passed Jul 15 05:20:05.330643 ignition[870]: Ignition finished successfully Jul 15 05:20:05.334016 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 05:20:05.336271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 05:20:05.336553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 05:20:05.338760 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:20:05.343081 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:20:05.345125 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:20:05.347212 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 05:20:05.379443 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 05:20:05.386795 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 05:20:05.390465 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 05:20:05.496078 kernel: EXT4-fs (vda9): mounted filesystem 277c3938-5262-4ab1-8fa3-62fde82f8257 r/w with ordered data mode. Quota mode: none. Jul 15 05:20:05.496136 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 05:20:05.496930 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 05:20:05.499545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:20:05.501830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 05:20:05.503198 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
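Illustrative aside (not journal output): eth0 above is matched by /usr/lib/systemd/network/zz-default.network and acquires 10.0.0.126/16 over DHCP. A sketch of an equivalent drop-in, written from Python for consistency with the other examples; the output path and interface name are assumptions:

    from pathlib import Path

    # Minimal systemd.network unit: match eth0 and hand it to DHCP.
    network_unit = "\n".join([
        "[Match]",
        "Name=eth0",
        "",
        "[Network]",
        "DHCP=yes",
        "",
    ])

    # Hypothetical destination; a real system would need root privileges and a
    # "networkctl reload" for systemd-networkd to pick this up.
    Path("/tmp/10-eth0-dhcp.network").write_text(network_unit)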
Jul 15 05:20:05.503238 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 05:20:05.503257 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:20:05.523562 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 05:20:05.525890 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 05:20:05.531612 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Jul 15 05:20:05.531636 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:20:05.531647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:20:05.531658 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:20:05.535296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 05:20:05.564519 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 05:20:05.569204 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Jul 15 05:20:05.572747 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 05:20:05.577785 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 05:20:05.666009 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 05:20:05.667295 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 05:20:05.669875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 05:20:05.689077 kernel: BTRFS info (device vda6): last unmount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:20:05.700200 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 05:20:05.713902 ignition[1003]: INFO : Ignition 2.21.0 Jul 15 05:20:05.713902 ignition[1003]: INFO : Stage: mount Jul 15 05:20:05.716639 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:05.716639 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:05.718790 ignition[1003]: INFO : mount: mount passed Jul 15 05:20:05.718790 ignition[1003]: INFO : Ignition finished successfully Jul 15 05:20:05.722919 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 05:20:05.725867 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 05:20:06.010085 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 05:20:06.012129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 05:20:06.033084 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jul 15 05:20:06.035114 kernel: BTRFS info (device vda6): first mount of filesystem 86e7a055-b4ff-48a6-9a0a-c301ff74862f Jul 15 05:20:06.035146 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 15 05:20:06.035157 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 05:20:06.039508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
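Illustrative aside (not journal output): the "cut: /sysroot/etc/passwd: No such file or directory" messages above come from initrd-setup-root seeding account files before the real ones exist under /sysroot. For reference, a small Python sketch splitting a passwd-style line into the colon-separated fields that cut would be extracting; the sample entry is invented:

    # passwd(5) fields: name, password placeholder, UID, GID, GECOS, home, shell.
    sample = "core:x:500:500:invented example user:/home/core:/bin/bash"

    name, _pw, uid, gid, gecos, home, shell = sample.split(":")
    print(name, uid, gid, home, shell)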
Jul 15 05:20:06.072992 ignition[1032]: INFO : Ignition 2.21.0 Jul 15 05:20:06.072992 ignition[1032]: INFO : Stage: files Jul 15 05:20:06.075258 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:06.075258 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:06.075258 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Jul 15 05:20:06.078887 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 05:20:06.078887 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 05:20:06.078887 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 05:20:06.083442 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 05:20:06.083442 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 05:20:06.083442 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 05:20:06.083442 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 15 05:20:06.080241 unknown[1032]: wrote ssh authorized keys file for user: core Jul 15 05:20:06.151898 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 05:20:06.283661 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 15 05:20:06.283661 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 05:20:06.287745 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 15 05:20:06.375963 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 05:20:06.484437 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 05:20:06.484437 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 05:20:06.488631 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 
05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:20:06.502742 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 15 05:20:06.639384 systemd-networkd[857]: eth0: Gained IPv6LL Jul 15 05:20:06.846217 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 05:20:07.441457 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 15 05:20:07.441457 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 05:20:07.445869 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 05:20:07.447954 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 05:20:07.467320 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:20:07.473165 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 05:20:07.474787 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 05:20:07.474787 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 05:20:07.474787 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 05:20:07.474787 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
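Illustrative aside (not journal output): op(10) and op(12) above show Ignition turning coreos-metadata.service off and prepare-helm.service on via presets. A sketch of a systemd.preset file expressing the same policy, emitted from Python to match the other examples; the file name is an assumption:

    from pathlib import Path

    # systemd.preset(5) syntax: one "enable NAME" or "disable NAME" per line.
    preset_lines = [
        "enable prepare-helm.service",
        "disable coreos-metadata.service",
    ]

    Path("/tmp/20-ignition.preset").write_text("\n".join(preset_lines) + "\n")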
Jul 15 05:20:07.474787 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 05:20:07.474787 ignition[1032]: INFO : files: files passed Jul 15 05:20:07.474787 ignition[1032]: INFO : Ignition finished successfully Jul 15 05:20:07.484315 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 05:20:07.486964 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 05:20:07.489799 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 05:20:07.506585 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 05:20:07.506707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 05:20:07.510087 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 05:20:07.513784 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:20:07.513784 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:20:07.517394 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 05:20:07.516639 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:20:07.517958 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 05:20:07.521066 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 05:20:07.575418 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 05:20:07.575541 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 05:20:07.576162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 05:20:07.576473 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 05:20:07.576827 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 05:20:07.577586 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 05:20:07.596531 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:20:07.598642 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 05:20:07.635474 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:20:07.637782 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:20:07.638500 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 05:20:07.638811 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 05:20:07.638953 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 05:20:07.643815 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 05:20:07.644346 systemd[1]: Stopped target basic.target - Basic System. Jul 15 05:20:07.644661 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 05:20:07.644991 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 05:20:07.645489 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 05:20:07.645806 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jul 15 05:20:07.646309 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 05:20:07.646625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 05:20:07.646968 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 05:20:07.647462 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 05:20:07.647766 systemd[1]: Stopped target swap.target - Swaps. Jul 15 05:20:07.648091 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 05:20:07.648210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 05:20:07.648905 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:20:07.649415 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:20:07.649699 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 05:20:07.649827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:20:07.671432 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 05:20:07.671557 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 05:20:07.675315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 05:20:07.675429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 05:20:07.675902 systemd[1]: Stopped target paths.target - Path Units. Jul 15 05:20:07.679759 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 05:20:07.683154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:20:07.685910 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 05:20:07.686446 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 05:20:07.686771 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 05:20:07.686876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 05:20:07.689724 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 05:20:07.689812 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 05:20:07.691415 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 05:20:07.691547 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 05:20:07.693201 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 05:20:07.693307 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 05:20:07.698804 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 05:20:07.699481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 05:20:07.699584 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:20:07.700581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 05:20:07.706396 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 05:20:07.706531 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:20:07.707630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 05:20:07.707725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 05:20:07.716159 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 05:20:07.716272 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 15 05:20:07.731408 ignition[1088]: INFO : Ignition 2.21.0 Jul 15 05:20:07.731408 ignition[1088]: INFO : Stage: umount Jul 15 05:20:07.733874 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 05:20:07.733874 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 05:20:07.733874 ignition[1088]: INFO : umount: umount passed Jul 15 05:20:07.733874 ignition[1088]: INFO : Ignition finished successfully Jul 15 05:20:07.735640 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 05:20:07.735769 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 05:20:07.737999 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 05:20:07.738473 systemd[1]: Stopped target network.target - Network. Jul 15 05:20:07.739579 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 05:20:07.739629 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 05:20:07.740959 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 05:20:07.741004 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 05:20:07.741444 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 05:20:07.741488 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 05:20:07.741769 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 05:20:07.741806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 05:20:07.746676 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 05:20:07.748602 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 05:20:07.757530 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 05:20:07.757660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 05:20:07.763455 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 05:20:07.763648 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 05:20:07.763762 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 05:20:07.767798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 05:20:07.768685 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 05:20:07.769960 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 05:20:07.770000 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:20:07.772659 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 05:20:07.774408 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 05:20:07.774458 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 05:20:07.780415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:20:07.780485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:20:07.783266 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 05:20:07.783317 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 05:20:07.783707 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 05:20:07.783748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:20:07.788231 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 15 05:20:07.789717 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 05:20:07.789784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:20:07.808117 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 05:20:07.808258 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 05:20:07.846288 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 05:20:07.846518 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:20:07.849617 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 05:20:07.849699 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 05:20:07.850411 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 05:20:07.850448 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:20:07.850705 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 05:20:07.850756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 05:20:07.851503 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 05:20:07.851553 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 05:20:07.857712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 05:20:07.857776 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 05:20:07.862759 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 05:20:07.863282 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 15 05:20:07.863346 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:20:07.867912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 05:20:07.867970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:20:07.871339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 05:20:07.871389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 05:20:07.875785 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 05:20:07.875846 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 05:20:07.875892 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 05:20:07.889311 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 05:20:07.889426 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 05:20:07.952251 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 05:20:07.952404 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 05:20:07.953562 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 05:20:07.955628 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 05:20:07.955685 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 05:20:07.961269 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 05:20:07.988427 systemd[1]: Switching root. 
Jul 15 05:20:08.030856 systemd-journald[219]: Journal stopped Jul 15 05:20:09.386867 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Jul 15 05:20:09.386955 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 05:20:09.386975 kernel: SELinux: policy capability open_perms=1 Jul 15 05:20:09.387000 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 05:20:09.387015 kernel: SELinux: policy capability always_check_network=0 Jul 15 05:20:09.387038 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 05:20:09.387072 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 05:20:09.387088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 05:20:09.387112 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 05:20:09.387127 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 05:20:09.387142 kernel: audit: type=1403 audit(1752556808.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 05:20:09.387166 systemd[1]: Successfully loaded SELinux policy in 59.776ms. Jul 15 05:20:09.387205 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.119ms. Jul 15 05:20:09.387224 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 05:20:09.387241 systemd[1]: Detected virtualization kvm. Jul 15 05:20:09.387257 systemd[1]: Detected architecture x86-64. Jul 15 05:20:09.387277 systemd[1]: Detected first boot. Jul 15 05:20:09.387294 systemd[1]: Initializing machine ID from VM UUID. Jul 15 05:20:09.387309 zram_generator::config[1134]: No configuration found. Jul 15 05:20:09.387328 kernel: Guest personality initialized and is inactive Jul 15 05:20:09.387345 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 15 05:20:09.387361 kernel: Initialized host personality Jul 15 05:20:09.387377 kernel: NET: Registered PF_VSOCK protocol family Jul 15 05:20:09.387393 systemd[1]: Populated /etc with preset unit settings. Jul 15 05:20:09.387410 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 05:20:09.387426 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 05:20:09.387443 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 05:20:09.387459 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 05:20:09.387476 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 05:20:09.387495 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 05:20:09.387512 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 05:20:09.387528 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 05:20:09.387544 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 05:20:09.387560 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 05:20:09.387577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 05:20:09.387593 systemd[1]: Created slice user.slice - User and Session Slice. 
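Illustrative aside (not journal output): because every entry carries a microsecond timestamp, gaps between boot phases can be measured straight from this journal, for example between the "Journal stopped" and "Received SIGTERM" journald entries above. A short Python sketch using the timestamp format seen throughout the log; the two sample values are copied from the surrounding lines:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"

    def delta_seconds(start, end, fmt=FMT):
        # The journal prints no year, so both values parse into the same
        # default year before subtracting.
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()

    print(delta_seconds("Jul 15 05:20:08.030856", "Jul 15 05:20:09.386867"))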
Jul 15 05:20:09.387623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 05:20:09.387661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 05:20:09.387682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 05:20:09.387699 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 05:20:09.387715 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 05:20:09.387732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 05:20:09.387749 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 15 05:20:09.387766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 05:20:09.387782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 05:20:09.387802 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 05:20:09.387819 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 05:20:09.387835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 05:20:09.387852 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 05:20:09.387872 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 05:20:09.387900 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 05:20:09.387916 systemd[1]: Reached target slices.target - Slice Units. Jul 15 05:20:09.387933 systemd[1]: Reached target swap.target - Swaps. Jul 15 05:20:09.387949 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 15 05:20:09.387966 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 05:20:09.387985 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 05:20:09.388001 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 05:20:09.388018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 05:20:09.388036 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 05:20:09.388071 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 05:20:09.388088 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 05:20:09.388104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 05:20:09.388121 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 05:20:09.388138 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:09.388159 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 05:20:09.388175 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 05:20:09.388192 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 05:20:09.388208 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 05:20:09.388225 systemd[1]: Reached target machines.target - Containers. 
Jul 15 05:20:09.388247 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 05:20:09.388264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:20:09.388281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 05:20:09.388301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 05:20:09.388317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:20:09.388333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:20:09.388350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:20:09.388367 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 05:20:09.388383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:20:09.388400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 05:20:09.388418 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 05:20:09.388438 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 05:20:09.388454 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 05:20:09.388470 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 05:20:09.388489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:20:09.388505 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 05:20:09.388521 kernel: loop: module loaded Jul 15 05:20:09.388538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 05:20:09.388554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 05:20:09.388570 kernel: fuse: init (API version 7.41) Jul 15 05:20:09.388589 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 05:20:09.388606 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 05:20:09.388622 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 05:20:09.388638 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 05:20:09.388655 systemd[1]: Stopped verity-setup.service. Jul 15 05:20:09.388675 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:09.388691 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 05:20:09.388707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 05:20:09.388724 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 05:20:09.388740 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 05:20:09.388759 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 05:20:09.388777 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 05:20:09.388793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 15 05:20:09.388837 systemd-journald[1210]: Collecting audit messages is disabled. Jul 15 05:20:09.388867 kernel: ACPI: bus type drm_connector registered Jul 15 05:20:09.388884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 05:20:09.388912 systemd-journald[1210]: Journal started Jul 15 05:20:09.388952 systemd-journald[1210]: Runtime Journal (/run/log/journal/bed08e1c6a24490f880a02e75bde60f3) is 6M, max 48.2M, 42.2M free. Jul 15 05:20:09.125663 systemd[1]: Queued start job for default target multi-user.target. Jul 15 05:20:09.137951 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 05:20:09.138434 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 05:20:09.392075 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 05:20:09.393325 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 05:20:09.393539 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 05:20:09.395182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:20:09.395459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:20:09.396951 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:20:09.397257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:20:09.399031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:20:09.399335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:20:09.401337 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 05:20:09.401668 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 05:20:09.403497 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:20:09.403779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:20:09.405455 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 05:20:09.407044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 05:20:09.408876 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 15 05:20:09.410770 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 05:20:09.425403 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 05:20:09.428223 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 05:20:09.430925 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 05:20:09.432509 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 05:20:09.432650 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 05:20:09.434975 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 05:20:09.438466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 05:20:09.440680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:20:09.442258 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 05:20:09.445177 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 15 05:20:09.446752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:20:09.449842 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 05:20:09.451236 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:20:09.453486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:20:09.456039 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 05:20:09.459215 systemd-journald[1210]: Time spent on flushing to /var/log/journal/bed08e1c6a24490f880a02e75bde60f3 is 27.180ms for 1043 entries. Jul 15 05:20:09.459215 systemd-journald[1210]: System Journal (/var/log/journal/bed08e1c6a24490f880a02e75bde60f3) is 8M, max 195.6M, 187.6M free. Jul 15 05:20:09.499221 systemd-journald[1210]: Received client request to flush runtime journal. Jul 15 05:20:09.499266 kernel: loop0: detected capacity change from 0 to 221472 Jul 15 05:20:09.459238 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 05:20:09.462430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 05:20:09.463010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 05:20:09.483269 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 05:20:09.486465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 05:20:09.488630 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 05:20:09.493603 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 05:20:09.507329 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 05:20:09.509697 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:20:09.516105 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 05:20:09.520909 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 05:20:09.524425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 05:20:09.535907 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 05:20:09.540084 kernel: loop1: detected capacity change from 0 to 114000 Jul 15 05:20:09.553576 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 15 05:20:09.553593 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 15 05:20:09.559564 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 05:20:09.572086 kernel: loop2: detected capacity change from 0 to 146488 Jul 15 05:20:09.604076 kernel: loop3: detected capacity change from 0 to 221472 Jul 15 05:20:09.611116 kernel: loop4: detected capacity change from 0 to 114000 Jul 15 05:20:09.619087 kernel: loop5: detected capacity change from 0 to 146488 Jul 15 05:20:09.628800 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 05:20:09.630444 (sd-merge)[1277]: Merged extensions into '/usr'. Jul 15 05:20:09.634941 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 05:20:09.634962 systemd[1]: Reloading... 
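Illustrative aside (not journal output): the (sd-merge) entries above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which then triggers the reload that follows. A small Python sketch listing which raw extension images are present before such a merge; the directories follow the paths written by Ignition earlier in this log and are otherwise assumptions:

    from pathlib import Path

    def list_extension_images(*dirs):
        # Collect *.raw sysext images (or symlinks to them) from each directory.
        found = []
        for d in dirs:
            base = Path(d)
            if base.is_dir():
                found.extend(sorted(p.name for p in base.glob("*.raw")))
        return found

    print(list_extension_images("/etc/extensions", "/opt/extensions/kubernetes"))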
Jul 15 05:20:09.699084 zram_generator::config[1303]: No configuration found. Jul 15 05:20:09.776268 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 05:20:09.802168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:09.892800 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 05:20:09.892920 systemd[1]: Reloading finished in 257 ms. Jul 15 05:20:09.929730 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 05:20:09.931369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 05:20:09.946467 systemd[1]: Starting ensure-sysext.service... Jul 15 05:20:09.948294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 05:20:09.960360 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Jul 15 05:20:09.960377 systemd[1]: Reloading... Jul 15 05:20:09.982180 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 05:20:09.982226 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 05:20:09.982617 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 05:20:09.982938 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 05:20:09.984172 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 05:20:09.984570 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 15 05:20:09.984662 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 15 05:20:09.994424 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:20:09.994440 systemd-tmpfiles[1342]: Skipping /boot Jul 15 05:20:10.010748 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 05:20:10.010835 systemd-tmpfiles[1342]: Skipping /boot Jul 15 05:20:10.012095 zram_generator::config[1372]: No configuration found. Jul 15 05:20:10.101394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:10.181221 systemd[1]: Reloading finished in 220 ms. Jul 15 05:20:10.202398 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 05:20:10.221754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 05:20:10.230129 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:20:10.232653 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 05:20:10.234981 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 05:20:10.248508 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 05:20:10.251209 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
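Illustrative aside (not journal output): the "Duplicate line for path ..., ignoring" warnings above are systemd-tmpfiles noticing that two tmpfiles.d fragments declare the same path. A sketch of how such a declaration is structured, parsed in Python; the mode and ownership values are invented for illustration:

    # tmpfiles.d(5) columns: Type Path Mode User Group Age Argument.
    line = "d /var/lib/nfs/sm 0700 statd statd - -"

    fields = line.split()
    entry = dict(zip(["type", "path", "mode", "user", "group", "age", "argument"], fields))
    print(entry)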
Jul 15 05:20:10.253531 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 05:20:10.257727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.258045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:20:10.263228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 05:20:10.266242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 05:20:10.269387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 05:20:10.269894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:20:10.269988 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:20:10.272338 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 05:20:10.273356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.279075 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.279278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:20:10.279474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:20:10.279602 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:20:10.279733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.285319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 05:20:10.286005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 05:20:10.288037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 05:20:10.290699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 05:20:10.290909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 05:20:10.292814 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 05:20:10.293183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 05:20:10.302955 augenrules[1441]: No rules Jul 15 05:20:10.302140 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Jul 15 05:20:10.303383 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:20:10.303887 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:20:10.305434 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 05:20:10.307519 systemd[1]: Finished ensure-sysext.service. 
Jul 15 05:20:10.314995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.315267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 05:20:10.316454 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 05:20:10.317714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 05:20:10.317755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 05:20:10.317806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 05:20:10.317854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 05:20:10.320572 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 15 05:20:10.333151 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 05:20:10.334227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 15 05:20:10.334534 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 05:20:10.335760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 05:20:10.337463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 05:20:10.340991 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 05:20:10.341228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 05:20:10.348121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 05:20:10.351106 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 05:20:10.359878 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 05:20:10.409144 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 15 05:20:10.433807 systemd-resolved[1411]: Positive Trust Anchors: Jul 15 05:20:10.433825 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 05:20:10.433864 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 05:20:10.437738 systemd-resolved[1411]: Defaulting to hostname 'linux'. Jul 15 05:20:10.440679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
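Illustrative aside (not journal output): the positive trust anchor printed above is the root-zone DS record systemd-resolved uses for DNSSEC validation. A tiny Python sketch splitting that record into its fields, using the exact value from the line above:

    ds_record = (". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, klass, rtype, key_tag, algorithm, digest_type, digest = ds_record.split()
    print(f"key tag={key_tag} algorithm={algorithm} digest type={digest_type}")
    print("digest:", digest)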
Jul 15 05:20:10.450699 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 05:20:10.452075 kernel: mousedev: PS/2 mouse device common for all mice Jul 15 05:20:10.456774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 05:20:10.463167 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 05:20:10.479073 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 15 05:20:10.488298 kernel: ACPI: button: Power Button [PWRF] Jul 15 05:20:10.487898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 05:20:10.509226 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 15 05:20:10.509470 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 15 05:20:10.509628 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 15 05:20:10.523913 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 05:20:10.525260 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 05:20:10.527223 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 05:20:10.528742 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 05:20:10.530146 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 15 05:20:10.533125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 05:20:10.534362 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 05:20:10.534390 systemd[1]: Reached target paths.target - Path Units. Jul 15 05:20:10.535412 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 05:20:10.536597 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 05:20:10.539239 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 05:20:10.540441 systemd[1]: Reached target timers.target - Timer Units. Jul 15 05:20:10.542103 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 05:20:10.543458 systemd-networkd[1473]: lo: Link UP Jul 15 05:20:10.543466 systemd-networkd[1473]: lo: Gained carrier Jul 15 05:20:10.544710 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 05:20:10.545970 systemd-networkd[1473]: Enumeration completed Jul 15 05:20:10.551121 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 05:20:10.552456 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 05:20:10.553665 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 05:20:10.557515 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:20:10.557528 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 05:20:10.558415 systemd-networkd[1473]: eth0: Link UP Jul 15 05:20:10.558599 systemd-networkd[1473]: eth0: Gained carrier Jul 15 05:20:10.558612 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 05:20:10.566377 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 05:20:10.568224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 05:20:10.570043 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 05:20:10.571312 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 05:20:10.576096 systemd-networkd[1473]: eth0: DHCPv4 address 10.0.0.126/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 05:20:10.576713 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. Jul 15 05:20:10.577787 systemd[1]: Reached target network.target - Network. Jul 15 05:20:10.579099 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 05:20:10.580026 systemd[1]: Reached target basic.target - Basic System. Jul 15 05:20:11.369644 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:20:11.369733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 05:20:11.370575 systemd-resolved[1411]: Clock change detected. Flushing caches. Jul 15 05:20:11.370691 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 05:20:11.370736 systemd-timesyncd[1450]: Initial clock synchronization to Tue 2025-07-15 05:20:11.369546 UTC. Jul 15 05:20:11.372824 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 05:20:11.375187 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 05:20:11.377829 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 05:20:11.385346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 05:20:11.387345 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 05:20:11.388368 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 05:20:11.390912 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 15 05:20:11.395863 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 05:20:11.399764 jq[1524]: false Jul 15 05:20:11.399697 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 05:20:11.401710 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 05:20:11.404514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 05:20:11.412836 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 05:20:11.417078 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 05:20:11.418913 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jul 15 05:20:11.418925 oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jul 15 05:20:11.420853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 15 05:20:11.421495 extend-filesystems[1525]: Found /dev/vda6 Jul 15 05:20:11.423389 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 05:20:11.430421 oslogin_cache_refresh[1526]: Failure getting users, quitting Jul 15 05:20:11.434371 extend-filesystems[1525]: Found /dev/vda9 Jul 15 05:20:11.434371 extend-filesystems[1525]: Checking size of /dev/vda9 Jul 15 05:20:11.436556 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting users, quitting Jul 15 05:20:11.436556 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:20:11.436556 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing group entry cache Jul 15 05:20:11.423850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 05:20:11.430445 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 15 05:20:11.424835 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 05:20:11.430491 oslogin_cache_refresh[1526]: Refreshing group entry cache Jul 15 05:20:11.430671 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 05:20:11.441238 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting groups, quitting Jul 15 05:20:11.441238 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:20:11.439622 oslogin_cache_refresh[1526]: Failure getting groups, quitting Jul 15 05:20:11.439738 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 15 05:20:11.441686 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 05:20:11.443294 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 05:20:11.443549 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 05:20:11.443892 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 15 05:20:11.444142 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 15 05:20:11.446026 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 05:20:11.446270 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 05:20:11.451662 jq[1543]: true Jul 15 05:20:11.464775 update_engine[1541]: I20250715 05:20:11.464685 1541 main.cc:92] Flatcar Update Engine starting Jul 15 05:20:11.464783 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 05:20:11.465046 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 05:20:11.478563 (ntainerd)[1561]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 05:20:11.482101 jq[1562]: true Jul 15 05:20:11.484242 dbus-daemon[1522]: [system] SELinux support is enabled Jul 15 05:20:11.484439 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 15 05:20:11.488042 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 05:20:11.488171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 05:20:11.489781 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 05:20:11.489894 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 05:20:11.500341 update_engine[1541]: I20250715 05:20:11.500290 1541 update_check_scheduler.cc:74] Next update check in 9m51s Jul 15 05:20:11.501474 systemd[1]: Started update-engine.service - Update Engine. Jul 15 05:20:11.504779 extend-filesystems[1525]: Resized partition /dev/vda9 Jul 15 05:20:11.507513 extend-filesystems[1571]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 05:20:11.509011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 05:20:11.552835 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 05:20:11.552915 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 05:20:11.552934 tar[1554]: linux-amd64/helm Jul 15 05:20:11.519930 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 05:20:11.521948 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 05:20:11.529870 systemd-logind[1536]: New seat seat0. Jul 15 05:20:11.532885 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 05:20:11.575624 systemd-logind[1536]: Watching system buttons on /dev/input/event2 (Power Button) Jul 15 05:20:11.588138 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 05:20:11.588138 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 05:20:11.588138 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 05:20:11.588824 systemd-logind[1536]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 15 05:20:11.594269 extend-filesystems[1525]: Resized filesystem in /dev/vda9 Jul 15 05:20:11.596581 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 05:20:11.596891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 05:20:11.608198 kernel: kvm_amd: TSC scaling supported Jul 15 05:20:11.608664 kernel: kvm_amd: Nested Virtualization enabled Jul 15 05:20:11.608692 kernel: kvm_amd: Nested Paging enabled Jul 15 05:20:11.608811 kernel: kvm_amd: LBR virtualization supported Jul 15 05:20:11.618276 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jul 15 05:20:11.622111 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 15 05:20:11.622144 kernel: kvm_amd: Virtual GIF supported Jul 15 05:20:11.687465 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 05:20:11.700107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 05:20:11.702035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 15 05:20:11.716936 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 05:20:11.731313 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 15 05:20:11.743755 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 05:20:11.749987 kernel: EDAC MC: Ver: 3.0.0 Jul 15 05:20:11.748957 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 05:20:11.774299 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 05:20:11.774686 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 05:20:11.779681 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 05:20:11.799666 containerd[1561]: time="2025-07-15T05:20:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 05:20:11.799648 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 05:20:11.802018 containerd[1561]: time="2025-07-15T05:20:11.801785418Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 05:20:11.807595 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 05:20:11.810286 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 15 05:20:11.811804 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 05:20:11.813224 containerd[1561]: time="2025-07-15T05:20:11.813167635Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.082µs" Jul 15 05:20:11.813224 containerd[1561]: time="2025-07-15T05:20:11.813221917Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 05:20:11.813278 containerd[1561]: time="2025-07-15T05:20:11.813242546Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 05:20:11.813527 containerd[1561]: time="2025-07-15T05:20:11.813494037Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 05:20:11.813565 containerd[1561]: time="2025-07-15T05:20:11.813530125Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 05:20:11.813586 containerd[1561]: time="2025-07-15T05:20:11.813571733Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:20:11.814950 containerd[1561]: time="2025-07-15T05:20:11.814916724Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 05:20:11.814950 containerd[1561]: time="2025-07-15T05:20:11.814945097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815319 containerd[1561]: time="2025-07-15T05:20:11.815285465Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815319 containerd[1561]: time="2025-07-15T05:20:11.815312266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815396 containerd[1561]: time="2025-07-15T05:20:11.815326011Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815396 containerd[1561]: time="2025-07-15T05:20:11.815336932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815686 containerd[1561]: time="2025-07-15T05:20:11.815463219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815838 containerd[1561]: time="2025-07-15T05:20:11.815808426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815871 containerd[1561]: time="2025-07-15T05:20:11.815853370Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 05:20:11.815871 containerd[1561]: time="2025-07-15T05:20:11.815865813Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 05:20:11.815919 containerd[1561]: time="2025-07-15T05:20:11.815905578Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 05:20:11.816299 containerd[1561]: time="2025-07-15T05:20:11.816195151Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 05:20:11.816299 containerd[1561]: time="2025-07-15T05:20:11.816286061Z" level=info msg="metadata content store policy set" policy=shared Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824674356Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824751771Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824772640Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824799811Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824815150Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824831070Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824847371Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824864583Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824881545Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824897384Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824911481Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 05:20:11.824962 containerd[1561]: time="2025-07-15T05:20:11.824928182Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825122787Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825143987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825158985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825170416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825182980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 05:20:11.825209 containerd[1561]: time="2025-07-15T05:20:11.825198659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 05:20:11.825315 containerd[1561]: time="2025-07-15T05:20:11.825215631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 05:20:11.825315 containerd[1561]: time="2025-07-15T05:20:11.825230038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 05:20:11.825315 containerd[1561]: time="2025-07-15T05:20:11.825248262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 05:20:11.825315 containerd[1561]: time="2025-07-15T05:20:11.825268139Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 05:20:11.825315 containerd[1561]: time="2025-07-15T05:20:11.825280803Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 05:20:11.825411 containerd[1561]: time="2025-07-15T05:20:11.825360232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 05:20:11.825411 containerd[1561]: time="2025-07-15T05:20:11.825377013Z" level=info msg="Start snapshots syncer" Jul 15 05:20:11.825411 containerd[1561]: time="2025-07-15T05:20:11.825406940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 05:20:11.825781 containerd[1561]: time="2025-07-15T05:20:11.825732600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 05:20:11.825889 containerd[1561]: time="2025-07-15T05:20:11.825792562Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 05:20:11.825889 containerd[1561]: time="2025-07-15T05:20:11.825858296Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 05:20:11.825974 containerd[1561]: time="2025-07-15T05:20:11.825953935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 05:20:11.826000 containerd[1561]: time="2025-07-15T05:20:11.825986937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.825998228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826009068Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826020440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826030549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826040948Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826059633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: 
time="2025-07-15T05:20:11.826069942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826079350Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826119756Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826133802Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826144091Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826153279Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826161474Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 05:20:11.826215 containerd[1561]: time="2025-07-15T05:20:11.826172114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826183616Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826201790Z" level=info msg="runtime interface created" Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826207260Z" level=info msg="created NRI interface" Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826219974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826231175Z" level=info msg="Connect containerd service" Jul 15 05:20:11.826611 containerd[1561]: time="2025-07-15T05:20:11.826252284Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 05:20:11.827117 containerd[1561]: time="2025-07-15T05:20:11.827086268Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:20:11.911882 tar[1554]: linux-amd64/LICENSE Jul 15 05:20:11.911882 tar[1554]: linux-amd64/README.md Jul 15 05:20:11.922328 containerd[1561]: time="2025-07-15T05:20:11.922262389Z" level=info msg="Start subscribing containerd event" Jul 15 05:20:11.922438 containerd[1561]: time="2025-07-15T05:20:11.922326970Z" level=info msg="Start recovering state" Jul 15 05:20:11.922516 containerd[1561]: time="2025-07-15T05:20:11.922496568Z" level=info msg="Start event monitor" Jul 15 05:20:11.922562 containerd[1561]: time="2025-07-15T05:20:11.922520042Z" level=info msg="Start cni network conf syncer for default" Jul 15 05:20:11.922562 containerd[1561]: time="2025-07-15T05:20:11.922529920Z" level=info msg="Start streaming server" Jul 15 05:20:11.922658 containerd[1561]: time="2025-07-15T05:20:11.922542424Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 05:20:11.922658 containerd[1561]: time="2025-07-15T05:20:11.922649224Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 05:20:11.922698 containerd[1561]: time="2025-07-15T05:20:11.922550098Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 05:20:11.922698 containerd[1561]: time="2025-07-15T05:20:11.922676145Z" level=info msg="runtime interface starting up..." Jul 15 05:20:11.922698 containerd[1561]: time="2025-07-15T05:20:11.922683248Z" level=info msg="starting plugins..." Jul 15 05:20:11.922772 containerd[1561]: time="2025-07-15T05:20:11.922709086Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 05:20:11.924111 containerd[1561]: time="2025-07-15T05:20:11.922849860Z" level=info msg="containerd successfully booted in 0.123768s" Jul 15 05:20:11.922944 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 05:20:11.930409 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 05:20:12.419869 systemd-networkd[1473]: eth0: Gained IPv6LL Jul 15 05:20:12.422894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 05:20:12.424712 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 05:20:12.427154 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 05:20:12.429450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:12.431571 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 05:20:12.467851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 05:20:12.469683 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 05:20:12.469923 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 05:20:12.472146 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 05:20:13.140565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:13.142199 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 05:20:13.143555 systemd[1]: Startup finished in 2.971s (kernel) + 5.940s (initrd) + 3.840s (userspace) = 12.752s. Jul 15 05:20:13.156097 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:13.567029 kubelet[1670]: E0715 05:20:13.566892 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:13.571042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:13.571243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:13.571681 systemd[1]: kubelet.service: Consumed 975ms CPU time, 265.3M memory peak. Jul 15 05:20:16.398706 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 05:20:16.400123 systemd[1]: Started sshd@0-10.0.0.126:22-10.0.0.1:33682.service - OpenSSH per-connection server daemon (10.0.0.1:33682). 
Jul 15 05:20:16.468773 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 33682 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:16.470505 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:16.477528 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 05:20:16.478798 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 05:20:16.485572 systemd-logind[1536]: New session 1 of user core. Jul 15 05:20:16.500134 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 05:20:16.502993 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 05:20:16.522917 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 05:20:16.525538 systemd-logind[1536]: New session c1 of user core. Jul 15 05:20:16.679318 systemd[1689]: Queued start job for default target default.target. Jul 15 05:20:16.695889 systemd[1689]: Created slice app.slice - User Application Slice. Jul 15 05:20:16.695913 systemd[1689]: Reached target paths.target - Paths. Jul 15 05:20:16.695951 systemd[1689]: Reached target timers.target - Timers. Jul 15 05:20:16.697439 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 05:20:16.708318 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 05:20:16.708448 systemd[1689]: Reached target sockets.target - Sockets. Jul 15 05:20:16.708491 systemd[1689]: Reached target basic.target - Basic System. Jul 15 05:20:16.708532 systemd[1689]: Reached target default.target - Main User Target. Jul 15 05:20:16.708570 systemd[1689]: Startup finished in 176ms. Jul 15 05:20:16.708705 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 05:20:16.710152 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 05:20:16.771447 systemd[1]: Started sshd@1-10.0.0.126:22-10.0.0.1:33698.service - OpenSSH per-connection server daemon (10.0.0.1:33698). Jul 15 05:20:16.833497 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 33698 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:16.835396 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:16.840235 systemd-logind[1536]: New session 2 of user core. Jul 15 05:20:16.854768 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 05:20:16.906981 sshd[1703]: Connection closed by 10.0.0.1 port 33698 Jul 15 05:20:16.907339 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:16.924443 systemd[1]: sshd@1-10.0.0.126:22-10.0.0.1:33698.service: Deactivated successfully. Jul 15 05:20:16.926140 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 05:20:16.926980 systemd-logind[1536]: Session 2 logged out. Waiting for processes to exit. Jul 15 05:20:16.929554 systemd[1]: Started sshd@2-10.0.0.126:22-10.0.0.1:33704.service - OpenSSH per-connection server daemon (10.0.0.1:33704). Jul 15 05:20:16.930114 systemd-logind[1536]: Removed session 2. Jul 15 05:20:16.989953 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 33704 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:16.991371 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:16.995497 systemd-logind[1536]: New session 3 of user core. 
Jul 15 05:20:17.006763 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 05:20:17.055726 sshd[1712]: Connection closed by 10.0.0.1 port 33704 Jul 15 05:20:17.056099 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:17.074177 systemd[1]: sshd@2-10.0.0.126:22-10.0.0.1:33704.service: Deactivated successfully. Jul 15 05:20:17.075963 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 05:20:17.076668 systemd-logind[1536]: Session 3 logged out. Waiting for processes to exit. Jul 15 05:20:17.079177 systemd[1]: Started sshd@3-10.0.0.126:22-10.0.0.1:33706.service - OpenSSH per-connection server daemon (10.0.0.1:33706). Jul 15 05:20:17.079961 systemd-logind[1536]: Removed session 3. Jul 15 05:20:17.128206 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 33706 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:17.129545 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:17.134119 systemd-logind[1536]: New session 4 of user core. Jul 15 05:20:17.145770 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 05:20:17.198510 sshd[1721]: Connection closed by 10.0.0.1 port 33706 Jul 15 05:20:17.198824 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:17.209070 systemd[1]: sshd@3-10.0.0.126:22-10.0.0.1:33706.service: Deactivated successfully. Jul 15 05:20:17.210790 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 05:20:17.211481 systemd-logind[1536]: Session 4 logged out. Waiting for processes to exit. Jul 15 05:20:17.214050 systemd[1]: Started sshd@4-10.0.0.126:22-10.0.0.1:33710.service - OpenSSH per-connection server daemon (10.0.0.1:33710). Jul 15 05:20:17.214810 systemd-logind[1536]: Removed session 4. Jul 15 05:20:17.270056 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 33710 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:17.271302 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:17.275472 systemd-logind[1536]: New session 5 of user core. Jul 15 05:20:17.285751 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 05:20:17.395462 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 05:20:17.395805 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:17.416356 sudo[1731]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:17.417973 sshd[1730]: Connection closed by 10.0.0.1 port 33710 Jul 15 05:20:17.418474 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:17.439202 systemd[1]: sshd@4-10.0.0.126:22-10.0.0.1:33710.service: Deactivated successfully. Jul 15 05:20:17.441097 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 05:20:17.441859 systemd-logind[1536]: Session 5 logged out. Waiting for processes to exit. Jul 15 05:20:17.444725 systemd[1]: Started sshd@5-10.0.0.126:22-10.0.0.1:33714.service - OpenSSH per-connection server daemon (10.0.0.1:33714). Jul 15 05:20:17.445401 systemd-logind[1536]: Removed session 5. 
Jul 15 05:20:17.505043 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 33714 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:17.506576 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:17.510656 systemd-logind[1536]: New session 6 of user core. Jul 15 05:20:17.524763 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 05:20:17.577844 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 05:20:17.578152 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:17.700293 sudo[1742]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:17.707702 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 05:20:17.708065 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:17.719761 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 05:20:17.772397 augenrules[1764]: No rules Jul 15 05:20:17.774107 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 05:20:17.774393 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 05:20:17.775787 sudo[1741]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:17.777446 sshd[1740]: Connection closed by 10.0.0.1 port 33714 Jul 15 05:20:17.777848 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:17.791213 systemd[1]: sshd@5-10.0.0.126:22-10.0.0.1:33714.service: Deactivated successfully. Jul 15 05:20:17.793041 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 05:20:17.793963 systemd-logind[1536]: Session 6 logged out. Waiting for processes to exit. Jul 15 05:20:17.796545 systemd[1]: Started sshd@6-10.0.0.126:22-10.0.0.1:33726.service - OpenSSH per-connection server daemon (10.0.0.1:33726). Jul 15 05:20:17.797300 systemd-logind[1536]: Removed session 6. Jul 15 05:20:17.858059 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 33726 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:20:17.859927 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:20:17.866039 systemd-logind[1536]: New session 7 of user core. Jul 15 05:20:17.875910 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 05:20:17.931218 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 05:20:17.931630 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 05:20:18.257914 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 15 05:20:18.280192 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 05:20:18.532911 dockerd[1798]: time="2025-07-15T05:20:18.532770501Z" level=info msg="Starting up" Jul 15 05:20:18.533602 dockerd[1798]: time="2025-07-15T05:20:18.533582012Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 05:20:18.546383 dockerd[1798]: time="2025-07-15T05:20:18.546322556Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 05:20:19.085240 dockerd[1798]: time="2025-07-15T05:20:19.085172948Z" level=info msg="Loading containers: start." Jul 15 05:20:19.096705 kernel: Initializing XFRM netlink socket Jul 15 05:20:19.673441 systemd-networkd[1473]: docker0: Link UP Jul 15 05:20:19.678772 dockerd[1798]: time="2025-07-15T05:20:19.678728613Z" level=info msg="Loading containers: done." Jul 15 05:20:19.692426 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1721841060-merged.mount: Deactivated successfully. Jul 15 05:20:19.694623 dockerd[1798]: time="2025-07-15T05:20:19.694573386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 05:20:19.694702 dockerd[1798]: time="2025-07-15T05:20:19.694685967Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 05:20:19.694802 dockerd[1798]: time="2025-07-15T05:20:19.694785955Z" level=info msg="Initializing buildkit" Jul 15 05:20:19.725557 dockerd[1798]: time="2025-07-15T05:20:19.725509901Z" level=info msg="Completed buildkit initialization" Jul 15 05:20:19.731899 dockerd[1798]: time="2025-07-15T05:20:19.731859214Z" level=info msg="Daemon has completed initialization" Jul 15 05:20:19.732005 dockerd[1798]: time="2025-07-15T05:20:19.731942039Z" level=info msg="API listen on /run/docker.sock" Jul 15 05:20:19.732088 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 05:20:20.401606 containerd[1561]: time="2025-07-15T05:20:20.401565865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 05:20:21.091784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333790110.mount: Deactivated successfully. 
Jul 15 05:20:22.229607 containerd[1561]: time="2025-07-15T05:20:22.229521248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.239816 containerd[1561]: time="2025-07-15T05:20:22.239762827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 15 05:20:22.366493 containerd[1561]: time="2025-07-15T05:20:22.366439240Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.460404 containerd[1561]: time="2025-07-15T05:20:22.460333378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:22.461273 containerd[1561]: time="2025-07-15T05:20:22.461235229Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.059631183s" Jul 15 05:20:22.461273 containerd[1561]: time="2025-07-15T05:20:22.461269603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 15 05:20:22.461788 containerd[1561]: time="2025-07-15T05:20:22.461771274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 05:20:23.583391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 05:20:23.586050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:23.776428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:23.788991 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:23.831974 kubelet[2080]: E0715 05:20:23.831884 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:23.838464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:23.838792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:23.839582 systemd[1]: kubelet.service: Consumed 220ms CPU time, 111.3M memory peak. 
Jul 15 05:20:24.595259 containerd[1561]: time="2025-07-15T05:20:24.595201383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:24.596198 containerd[1561]: time="2025-07-15T05:20:24.596156323Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 15 05:20:24.597423 containerd[1561]: time="2025-07-15T05:20:24.597381831Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:24.600226 containerd[1561]: time="2025-07-15T05:20:24.600174446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:24.601098 containerd[1561]: time="2025-07-15T05:20:24.601067240Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.13926615s" Jul 15 05:20:24.601148 containerd[1561]: time="2025-07-15T05:20:24.601103588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 15 05:20:24.601593 containerd[1561]: time="2025-07-15T05:20:24.601572908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 05:20:26.448306 containerd[1561]: time="2025-07-15T05:20:26.448230762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:26.449234 containerd[1561]: time="2025-07-15T05:20:26.449165915Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 15 05:20:26.450651 containerd[1561]: time="2025-07-15T05:20:26.450591177Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:26.453258 containerd[1561]: time="2025-07-15T05:20:26.453228931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:26.454406 containerd[1561]: time="2025-07-15T05:20:26.454372425Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.852772557s" Jul 15 05:20:26.454463 containerd[1561]: time="2025-07-15T05:20:26.454405547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 15 05:20:26.455117 
containerd[1561]: time="2025-07-15T05:20:26.455094929Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 05:20:27.629950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815610876.mount: Deactivated successfully. Jul 15 05:20:27.895839 containerd[1561]: time="2025-07-15T05:20:27.895728112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:27.896503 containerd[1561]: time="2025-07-15T05:20:27.896473089Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 15 05:20:27.897494 containerd[1561]: time="2025-07-15T05:20:27.897462033Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:27.899173 containerd[1561]: time="2025-07-15T05:20:27.899142272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:27.899620 containerd[1561]: time="2025-07-15T05:20:27.899592106Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.444469425s" Jul 15 05:20:27.899620 containerd[1561]: time="2025-07-15T05:20:27.899616632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 15 05:20:27.900090 containerd[1561]: time="2025-07-15T05:20:27.900051447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 05:20:28.404681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038364104.mount: Deactivated successfully. 
Jul 15 05:20:29.404653 containerd[1561]: time="2025-07-15T05:20:29.404571845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:29.405502 containerd[1561]: time="2025-07-15T05:20:29.405474067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 15 05:20:29.406753 containerd[1561]: time="2025-07-15T05:20:29.406718640Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:29.409524 containerd[1561]: time="2025-07-15T05:20:29.409482922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:29.410500 containerd[1561]: time="2025-07-15T05:20:29.410471104Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.510392516s" Jul 15 05:20:29.410548 containerd[1561]: time="2025-07-15T05:20:29.410503435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 15 05:20:29.411035 containerd[1561]: time="2025-07-15T05:20:29.411008462Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 05:20:30.361614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539050757.mount: Deactivated successfully. 
Jul 15 05:20:30.367998 containerd[1561]: time="2025-07-15T05:20:30.367965564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:30.369101 containerd[1561]: time="2025-07-15T05:20:30.369015192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 15 05:20:30.370579 containerd[1561]: time="2025-07-15T05:20:30.370523830Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:30.372836 containerd[1561]: time="2025-07-15T05:20:30.372788606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 05:20:30.373421 containerd[1561]: time="2025-07-15T05:20:30.373354867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 962.319095ms" Jul 15 05:20:30.373421 containerd[1561]: time="2025-07-15T05:20:30.373399100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 15 05:20:30.373933 containerd[1561]: time="2025-07-15T05:20:30.373824227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 05:20:31.445422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711233766.mount: Deactivated successfully. Jul 15 05:20:34.082797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 15 05:20:34.084579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:34.317597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:34.322743 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 05:20:34.479575 kubelet[2218]: E0715 05:20:34.479442 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 05:20:34.483599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 05:20:34.483827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 05:20:34.484228 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. 
Jul 15 05:20:34.964098 containerd[1561]: time="2025-07-15T05:20:34.964034071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:34.964920 containerd[1561]: time="2025-07-15T05:20:34.964881911Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 15 05:20:34.966434 containerd[1561]: time="2025-07-15T05:20:34.966362867Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:34.969198 containerd[1561]: time="2025-07-15T05:20:34.969146826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:34.970367 containerd[1561]: time="2025-07-15T05:20:34.970331928Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.596476161s" Jul 15 05:20:34.970367 containerd[1561]: time="2025-07-15T05:20:34.970364990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 15 05:20:37.409448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:37.409613 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. Jul 15 05:20:37.412190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:37.439724 systemd[1]: Reload requested from client PID 2259 ('systemctl') (unit session-7.scope)... Jul 15 05:20:37.439747 systemd[1]: Reloading... Jul 15 05:20:37.537701 zram_generator::config[2305]: No configuration found. Jul 15 05:20:37.775455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:37.903145 systemd[1]: Reloading finished in 462 ms. Jul 15 05:20:37.964457 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 05:20:37.964550 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 05:20:37.964851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:37.964891 systemd[1]: kubelet.service: Consumed 158ms CPU time, 98.3M memory peak. Jul 15 05:20:37.966224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:38.128710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:38.132531 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:20:38.164845 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:20:38.164845 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 15 05:20:38.164845 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:20:38.165207 kubelet[2350]: I0715 05:20:38.164925 2350 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:20:38.381513 kubelet[2350]: I0715 05:20:38.381412 2350 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:20:38.381513 kubelet[2350]: I0715 05:20:38.381439 2350 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:20:38.381704 kubelet[2350]: I0715 05:20:38.381689 2350 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:20:38.402653 kubelet[2350]: E0715 05:20:38.402610 2350 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.126:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:38.403446 kubelet[2350]: I0715 05:20:38.403424 2350 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:20:38.408372 kubelet[2350]: I0715 05:20:38.408339 2350 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:20:38.414288 kubelet[2350]: I0715 05:20:38.414113 2350 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 05:20:38.414757 kubelet[2350]: I0715 05:20:38.414737 2350 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:20:38.414916 kubelet[2350]: I0715 05:20:38.414889 2350 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:20:38.415153 kubelet[2350]: I0715 05:20:38.414914 2350 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:20:38.415270 kubelet[2350]: I0715 05:20:38.415174 2350 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:20:38.415270 kubelet[2350]: I0715 05:20:38.415187 2350 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 05:20:38.415330 kubelet[2350]: I0715 05:20:38.415301 2350 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:38.418814 kubelet[2350]: I0715 05:20:38.418786 2350 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:20:38.418814 kubelet[2350]: I0715 05:20:38.418817 2350 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:20:38.418881 kubelet[2350]: I0715 05:20:38.418855 2350 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:20:38.418881 kubelet[2350]: I0715 05:20:38.418872 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:20:38.421011 kubelet[2350]: I0715 05:20:38.420940 2350 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:20:38.421279 kubelet[2350]: I0715 05:20:38.421258 2350 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:20:38.421339 kubelet[2350]: W0715 05:20:38.421320 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 15 05:20:38.423451 kubelet[2350]: W0715 05:20:38.422400 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:38.423451 kubelet[2350]: E0715 05:20:38.422451 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:38.423451 kubelet[2350]: I0715 05:20:38.423264 2350 server.go:1274] "Started kubelet" Jul 15 05:20:38.424407 kubelet[2350]: I0715 05:20:38.423700 2350 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:20:38.424407 kubelet[2350]: W0715 05:20:38.423723 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:38.424407 kubelet[2350]: E0715 05:20:38.423755 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:38.424407 kubelet[2350]: I0715 05:20:38.424107 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:20:38.424588 kubelet[2350]: I0715 05:20:38.424570 2350 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:20:38.425360 kubelet[2350]: I0715 05:20:38.425335 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:20:38.425559 kubelet[2350]: I0715 05:20:38.425542 2350 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:20:38.426528 kubelet[2350]: I0715 05:20:38.426488 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:20:38.427455 kubelet[2350]: E0715 05:20:38.426283 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18525528bae62f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:20:38.423244556 +0000 UTC m=+0.287125149,LastTimestamp:2025-07-15 05:20:38.423244556 +0000 UTC m=+0.287125149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:20:38.427567 kubelet[2350]: E0715 05:20:38.427470 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:38.427567 
kubelet[2350]: I0715 05:20:38.427500 2350 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:20:38.427630 kubelet[2350]: I0715 05:20:38.427626 2350 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:20:38.427703 kubelet[2350]: I0715 05:20:38.427692 2350 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:20:38.427933 kubelet[2350]: W0715 05:20:38.427901 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:38.427962 kubelet[2350]: E0715 05:20:38.427935 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:38.428312 kubelet[2350]: E0715 05:20:38.428105 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="200ms" Jul 15 05:20:38.428889 kubelet[2350]: I0715 05:20:38.428869 2350 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:20:38.428966 kubelet[2350]: I0715 05:20:38.428949 2350 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:20:38.429026 kubelet[2350]: E0715 05:20:38.429014 2350 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:20:38.429776 kubelet[2350]: I0715 05:20:38.429765 2350 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:20:38.441916 kubelet[2350]: I0715 05:20:38.441884 2350 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:20:38.441916 kubelet[2350]: I0715 05:20:38.441904 2350 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:20:38.441916 kubelet[2350]: I0715 05:20:38.441918 2350 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:38.443676 kubelet[2350]: I0715 05:20:38.443624 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 05:20:38.445155 kubelet[2350]: I0715 05:20:38.445108 2350 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 05:20:38.445155 kubelet[2350]: I0715 05:20:38.445135 2350 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:20:38.445155 kubelet[2350]: I0715 05:20:38.445150 2350 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:20:38.445261 kubelet[2350]: E0715 05:20:38.445189 2350 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:20:38.445702 kubelet[2350]: W0715 05:20:38.445619 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:38.445740 kubelet[2350]: E0715 05:20:38.445707 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:38.527799 kubelet[2350]: E0715 05:20:38.527768 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:38.546119 kubelet[2350]: E0715 05:20:38.546092 2350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:20:38.628385 kubelet[2350]: E0715 05:20:38.628355 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:38.628616 kubelet[2350]: E0715 05:20:38.628589 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="400ms" Jul 15 05:20:38.728588 kubelet[2350]: E0715 05:20:38.728453 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:38.746769 kubelet[2350]: E0715 05:20:38.746706 2350 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 05:20:38.829082 kubelet[2350]: E0715 05:20:38.829029 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:38.879440 kubelet[2350]: I0715 05:20:38.879392 2350 policy_none.go:49] "None policy: Start" Jul 15 05:20:38.880149 kubelet[2350]: I0715 05:20:38.880115 2350 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:20:38.880149 kubelet[2350]: I0715 05:20:38.880137 2350 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:20:38.887065 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 05:20:38.897377 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 05:20:38.900494 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 05:20:38.913616 kubelet[2350]: I0715 05:20:38.913583 2350 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:20:38.913888 kubelet[2350]: I0715 05:20:38.913868 2350 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:20:38.913933 kubelet[2350]: I0715 05:20:38.913885 2350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:20:38.914077 kubelet[2350]: I0715 05:20:38.914060 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:20:38.915330 kubelet[2350]: E0715 05:20:38.915296 2350 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 05:20:39.015360 kubelet[2350]: I0715 05:20:39.015236 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 05:20:39.015671 kubelet[2350]: E0715 05:20:39.015593 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Jul 15 05:20:39.029361 kubelet[2350]: E0715 05:20:39.029290 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="800ms" Jul 15 05:20:39.154855 systemd[1]: Created slice kubepods-burstable-podeb204dd96efac615ece0f2fb87eeaf4c.slice - libcontainer container kubepods-burstable-podeb204dd96efac615ece0f2fb87eeaf4c.slice. Jul 15 05:20:39.177039 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 15 05:20:39.188164 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 15 05:20:39.217476 kubelet[2350]: I0715 05:20:39.217456 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 05:20:39.217828 kubelet[2350]: E0715 05:20:39.217761 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Jul 15 05:20:39.231092 kubelet[2350]: I0715 05:20:39.231068 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:39.231135 kubelet[2350]: I0715 05:20:39.231102 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:39.231135 kubelet[2350]: I0715 05:20:39.231126 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:39.231202 kubelet[2350]: I0715 05:20:39.231141 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:39.231202 kubelet[2350]: I0715 05:20:39.231154 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:39.231202 kubelet[2350]: I0715 05:20:39.231172 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:20:39.231202 kubelet[2350]: I0715 05:20:39.231192 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:39.231283 kubelet[2350]: I0715 05:20:39.231209 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:39.231283 kubelet[2350]: I0715 05:20:39.231231 2350 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:39.476057 containerd[1561]: time="2025-07-15T05:20:39.475996730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb204dd96efac615ece0f2fb87eeaf4c,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:39.486594 containerd[1561]: time="2025-07-15T05:20:39.486543261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:39.491042 containerd[1561]: time="2025-07-15T05:20:39.490964960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:39.553121 kubelet[2350]: W0715 05:20:39.553027 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:39.553121 kubelet[2350]: E0715 05:20:39.553102 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.126:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:39.619511 kubelet[2350]: I0715 05:20:39.619470 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 05:20:39.619882 kubelet[2350]: E0715 05:20:39.619840 2350 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.126:6443/api/v1/nodes\": dial tcp 10.0.0.126:6443: connect: connection refused" node="localhost" Jul 15 05:20:39.683795 kubelet[2350]: W0715 05:20:39.683719 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:39.683795 kubelet[2350]: E0715 05:20:39.683787 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.126:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:39.705552 kubelet[2350]: W0715 05:20:39.705479 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:39.705597 kubelet[2350]: E0715 05:20:39.705549 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.126:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:39.727392 kubelet[2350]: W0715 05:20:39.727276 2350 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.126:6443: connect: connection refused Jul 15 05:20:39.727392 kubelet[2350]: E0715 05:20:39.727326 2350 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.126:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.126:6443: connect: connection refused" logger="UnhandledError" Jul 15 05:20:39.795720 containerd[1561]: time="2025-07-15T05:20:39.795666697Z" level=info msg="connecting to shim 77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91" address="unix:///run/containerd/s/63c283a02a46b23ee68daa0d45ec5d45c32eaea781d4e261fef733fc4e1df050" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:39.796399 containerd[1561]: time="2025-07-15T05:20:39.796363693Z" level=info msg="connecting to shim e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5" address="unix:///run/containerd/s/063f13037bb1d474c83adefc2b2d19dcb78b3c12b25d7f65d6f194d29e5b29df" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:39.809438 containerd[1561]: time="2025-07-15T05:20:39.809381658Z" level=info msg="connecting to shim 8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650" address="unix:///run/containerd/s/a2d6bc5f3c0c1f253bfef44a792073b99bf077c6439ad7efaebf4bef868d2c24" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:39.823785 systemd[1]: Started cri-containerd-e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5.scope - libcontainer container e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5. Jul 15 05:20:39.827907 systemd[1]: Started cri-containerd-77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91.scope - libcontainer container 77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91. Jul 15 05:20:39.830316 kubelet[2350]: E0715 05:20:39.830277 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.126:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.126:6443: connect: connection refused" interval="1.6s" Jul 15 05:20:39.833116 systemd[1]: Started cri-containerd-8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650.scope - libcontainer container 8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650. 
Jul 15 05:20:39.847898 kubelet[2350]: E0715 05:20:39.847806 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.126:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.126:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18525528bae62f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 05:20:38.423244556 +0000 UTC m=+0.287125149,LastTimestamp:2025-07-15 05:20:38.423244556 +0000 UTC m=+0.287125149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 05:20:39.871284 containerd[1561]: time="2025-07-15T05:20:39.871219441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb204dd96efac615ece0f2fb87eeaf4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5\"" Jul 15 05:20:39.874014 containerd[1561]: time="2025-07-15T05:20:39.873983803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91\"" Jul 15 05:20:39.875009 containerd[1561]: time="2025-07-15T05:20:39.874794984Z" level=info msg="CreateContainer within sandbox \"e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 05:20:39.876356 containerd[1561]: time="2025-07-15T05:20:39.876268076Z" level=info msg="CreateContainer within sandbox \"77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 05:20:39.884663 containerd[1561]: time="2025-07-15T05:20:39.884613971Z" level=info msg="Container fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:39.888619 containerd[1561]: time="2025-07-15T05:20:39.888577391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650\"" Jul 15 05:20:39.890494 containerd[1561]: time="2025-07-15T05:20:39.890463116Z" level=info msg="CreateContainer within sandbox \"8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 05:20:39.898073 containerd[1561]: time="2025-07-15T05:20:39.898037986Z" level=info msg="Container d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:39.903236 containerd[1561]: time="2025-07-15T05:20:39.903201055Z" level=info msg="CreateContainer within sandbox \"e0d0a928eb34b6c5db70cff6ae5d3e1e06831c0eb4c8f8f9e883ae79a0dcefa5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9\"" Jul 15 05:20:39.903837 containerd[1561]: time="2025-07-15T05:20:39.903784940Z" level=info msg="StartContainer for 
\"fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9\"" Jul 15 05:20:39.904933 containerd[1561]: time="2025-07-15T05:20:39.904898658Z" level=info msg="connecting to shim fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9" address="unix:///run/containerd/s/063f13037bb1d474c83adefc2b2d19dcb78b3c12b25d7f65d6f194d29e5b29df" protocol=ttrpc version=3 Jul 15 05:20:39.907774 containerd[1561]: time="2025-07-15T05:20:39.907746927Z" level=info msg="CreateContainer within sandbox \"77f814f58fb80945f0d4456023f740da98bd1dcc496294bcd1dd8d5aeab2ae91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a\"" Jul 15 05:20:39.907888 containerd[1561]: time="2025-07-15T05:20:39.907825785Z" level=info msg="Container 130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:39.908178 containerd[1561]: time="2025-07-15T05:20:39.908148650Z" level=info msg="StartContainer for \"d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a\"" Jul 15 05:20:39.909133 containerd[1561]: time="2025-07-15T05:20:39.909106686Z" level=info msg="connecting to shim d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a" address="unix:///run/containerd/s/63c283a02a46b23ee68daa0d45ec5d45c32eaea781d4e261fef733fc4e1df050" protocol=ttrpc version=3 Jul 15 05:20:39.917739 containerd[1561]: time="2025-07-15T05:20:39.917695748Z" level=info msg="CreateContainer within sandbox \"8645aed540919f44b92a66872be8d7d2af50f0b72109d1e989f49227b343a650\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600\"" Jul 15 05:20:39.918596 containerd[1561]: time="2025-07-15T05:20:39.918562653Z" level=info msg="StartContainer for \"130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600\"" Jul 15 05:20:39.920245 containerd[1561]: time="2025-07-15T05:20:39.920216092Z" level=info msg="connecting to shim 130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600" address="unix:///run/containerd/s/a2d6bc5f3c0c1f253bfef44a792073b99bf077c6439ad7efaebf4bef868d2c24" protocol=ttrpc version=3 Jul 15 05:20:39.927892 systemd[1]: Started cri-containerd-fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9.scope - libcontainer container fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9. Jul 15 05:20:39.935780 systemd[1]: Started cri-containerd-d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a.scope - libcontainer container d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a. Jul 15 05:20:39.939398 systemd[1]: Started cri-containerd-130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600.scope - libcontainer container 130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600. 
Jul 15 05:20:39.989620 containerd[1561]: time="2025-07-15T05:20:39.989481450Z" level=info msg="StartContainer for \"fd15a3f8c56feb99d805db2ed38ebdd6a154eb561e02d0a847dc3ad2157bfaf9\" returns successfully" Jul 15 05:20:39.997023 containerd[1561]: time="2025-07-15T05:20:39.996954840Z" level=info msg="StartContainer for \"d8ab24e0c07542187a875801081816b115eb17753374c30dd4295db0c0b70b5a\" returns successfully" Jul 15 05:20:40.001931 containerd[1561]: time="2025-07-15T05:20:40.001681772Z" level=info msg="StartContainer for \"130fa130239346f250164768ef16418b43b9e62fa30f881cb24e6d1ae2626600\" returns successfully" Jul 15 05:20:40.422385 kubelet[2350]: I0715 05:20:40.422348 2350 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 05:20:40.909005 kubelet[2350]: I0715 05:20:40.908903 2350 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 05:20:40.909005 kubelet[2350]: E0715 05:20:40.908938 2350 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 05:20:40.919185 kubelet[2350]: E0715 05:20:40.919135 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.019986 kubelet[2350]: E0715 05:20:41.019931 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.120579 kubelet[2350]: E0715 05:20:41.120516 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.221099 kubelet[2350]: E0715 05:20:41.220974 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.321674 kubelet[2350]: E0715 05:20:41.321613 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.422662 kubelet[2350]: E0715 05:20:41.422594 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.522867 kubelet[2350]: E0715 05:20:41.522713 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.623370 kubelet[2350]: E0715 05:20:41.623315 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.723997 kubelet[2350]: E0715 05:20:41.723942 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.824686 kubelet[2350]: E0715 05:20:41.824554 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:41.924715 kubelet[2350]: E0715 05:20:41.924660 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.025237 kubelet[2350]: E0715 05:20:42.025179 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.125751 kubelet[2350]: E0715 05:20:42.125707 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.226088 kubelet[2350]: E0715 05:20:42.226036 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.326571 
kubelet[2350]: E0715 05:20:42.326508 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.426890 kubelet[2350]: E0715 05:20:42.426759 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.527996 kubelet[2350]: E0715 05:20:42.527936 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.569989 systemd[1]: Reload requested from client PID 2631 ('systemctl') (unit session-7.scope)... Jul 15 05:20:42.570005 systemd[1]: Reloading... Jul 15 05:20:42.628920 kubelet[2350]: E0715 05:20:42.628887 2350 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:42.645721 zram_generator::config[2677]: No configuration found. Jul 15 05:20:42.734067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 05:20:42.866650 systemd[1]: Reloading finished in 296 ms. Jul 15 05:20:42.902295 kubelet[2350]: I0715 05:20:42.902233 2350 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:20:42.902436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:42.917241 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 05:20:42.917661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:42.917734 systemd[1]: kubelet.service: Consumed 659ms CPU time, 130.3M memory peak. Jul 15 05:20:42.920080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 05:20:43.139462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 05:20:43.151005 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 05:20:43.188998 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 05:20:43.188998 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 05:20:43.188998 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 05:20:43.189369 kubelet[2719]: I0715 05:20:43.189032 2719 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 05:20:43.195619 kubelet[2719]: I0715 05:20:43.195588 2719 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 05:20:43.195619 kubelet[2719]: I0715 05:20:43.195611 2719 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 05:20:43.195894 kubelet[2719]: I0715 05:20:43.195873 2719 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 05:20:43.197121 kubelet[2719]: I0715 05:20:43.197095 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 05:20:43.199714 kubelet[2719]: I0715 05:20:43.199678 2719 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 05:20:43.203026 kubelet[2719]: I0715 05:20:43.202995 2719 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 05:20:43.208352 kubelet[2719]: I0715 05:20:43.208325 2719 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 05:20:43.208473 kubelet[2719]: I0715 05:20:43.208456 2719 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 05:20:43.208677 kubelet[2719]: I0715 05:20:43.208613 2719 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 05:20:43.208910 kubelet[2719]: I0715 05:20:43.208676 2719 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 05:20:43.208995 kubelet[2719]: I0715 05:20:43.208919 2719 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 05:20:43.208995 kubelet[2719]: I0715 05:20:43.208931 2719 container_manager_linux.go:300] "Creating device plugin manager" 
Jul 15 05:20:43.208995 kubelet[2719]: I0715 05:20:43.208959 2719 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:43.209094 kubelet[2719]: I0715 05:20:43.209078 2719 kubelet.go:408] "Attempting to sync node with API server" Jul 15 05:20:43.209120 kubelet[2719]: I0715 05:20:43.209097 2719 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 05:20:43.209143 kubelet[2719]: I0715 05:20:43.209138 2719 kubelet.go:314] "Adding apiserver pod source" Jul 15 05:20:43.209167 kubelet[2719]: I0715 05:20:43.209151 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 05:20:43.210422 kubelet[2719]: I0715 05:20:43.210396 2719 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 05:20:43.210964 kubelet[2719]: I0715 05:20:43.210947 2719 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 05:20:43.211439 kubelet[2719]: I0715 05:20:43.211421 2719 server.go:1274] "Started kubelet" Jul 15 05:20:43.212196 kubelet[2719]: I0715 05:20:43.212121 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 05:20:43.212585 kubelet[2719]: I0715 05:20:43.212557 2719 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 05:20:43.212783 kubelet[2719]: I0715 05:20:43.212744 2719 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 05:20:43.214382 kubelet[2719]: I0715 05:20:43.214338 2719 server.go:449] "Adding debug handlers to kubelet server" Jul 15 05:20:43.219373 kubelet[2719]: I0715 05:20:43.219138 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 05:20:43.220845 kubelet[2719]: I0715 05:20:43.220797 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 05:20:43.221021 kubelet[2719]: I0715 05:20:43.220929 2719 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 05:20:43.223005 kubelet[2719]: I0715 05:20:43.222961 2719 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 05:20:43.223179 kubelet[2719]: I0715 05:20:43.223124 2719 reconciler.go:26] "Reconciler: start to sync state" Jul 15 05:20:43.223722 kubelet[2719]: E0715 05:20:43.223629 2719 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 05:20:43.225704 kubelet[2719]: E0715 05:20:43.224874 2719 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 05:20:43.225704 kubelet[2719]: I0715 05:20:43.225094 2719 factory.go:221] Registration of the systemd container factory successfully Jul 15 05:20:43.225704 kubelet[2719]: I0715 05:20:43.225207 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 05:20:43.230790 kubelet[2719]: I0715 05:20:43.230324 2719 factory.go:221] Registration of the containerd container factory successfully Jul 15 05:20:43.247795 kubelet[2719]: I0715 05:20:43.247740 2719 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 15 05:20:43.251569 kubelet[2719]: I0715 05:20:43.251539 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 15 05:20:43.251569 kubelet[2719]: I0715 05:20:43.251569 2719 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 05:20:43.251658 kubelet[2719]: I0715 05:20:43.251587 2719 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 05:20:43.251658 kubelet[2719]: E0715 05:20:43.251626 2719 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 05:20:43.280975 kubelet[2719]: I0715 05:20:43.280945 2719 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 05:20:43.280975 kubelet[2719]: I0715 05:20:43.280963 2719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 05:20:43.280975 kubelet[2719]: I0715 05:20:43.280983 2719 state_mem.go:36] "Initialized new in-memory state store" Jul 15 05:20:43.281163 kubelet[2719]: I0715 05:20:43.281138 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 05:20:43.281194 kubelet[2719]: I0715 05:20:43.281154 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 05:20:43.281194 kubelet[2719]: I0715 05:20:43.281180 2719 policy_none.go:49] "None policy: Start" Jul 15 05:20:43.281749 kubelet[2719]: I0715 05:20:43.281729 2719 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 05:20:43.281749 kubelet[2719]: I0715 05:20:43.281749 2719 state_mem.go:35] "Initializing new in-memory state store" Jul 15 05:20:43.281878 kubelet[2719]: I0715 05:20:43.281850 2719 state_mem.go:75] "Updated machine memory state" Jul 15 05:20:43.285907 kubelet[2719]: I0715 05:20:43.285866 2719 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 05:20:43.286073 kubelet[2719]: I0715 05:20:43.286050 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 05:20:43.286117 kubelet[2719]: I0715 05:20:43.286068 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 05:20:43.286265 kubelet[2719]: I0715 05:20:43.286244 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 05:20:43.388001 kubelet[2719]: I0715 05:20:43.387970 2719 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 05:20:43.394243 kubelet[2719]: I0715 05:20:43.394123 2719 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 05:20:43.394243 kubelet[2719]: I0715 05:20:43.394185 2719 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 05:20:43.525184 kubelet[2719]: I0715 05:20:43.525131 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:43.525184 kubelet[2719]: I0715 05:20:43.525169 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 05:20:43.525184 
kubelet[2719]: I0715 05:20:43.525190 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:43.525385 kubelet[2719]: I0715 05:20:43.525205 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:43.525385 kubelet[2719]: I0715 05:20:43.525224 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb204dd96efac615ece0f2fb87eeaf4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb204dd96efac615ece0f2fb87eeaf4c\") " pod="kube-system/kube-apiserver-localhost" Jul 15 05:20:43.525385 kubelet[2719]: I0715 05:20:43.525243 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:43.525385 kubelet[2719]: I0715 05:20:43.525273 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:43.525385 kubelet[2719]: I0715 05:20:43.525330 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:43.525517 kubelet[2719]: I0715 05:20:43.525361 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:43.570387 sudo[2754]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 05:20:43.570815 sudo[2754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 05:20:43.860557 sudo[2754]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:44.210843 kubelet[2719]: I0715 05:20:44.210723 2719 apiserver.go:52] "Watching apiserver" Jul 15 05:20:44.225539 kubelet[2719]: I0715 05:20:44.224693 2719 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 05:20:44.270745 kubelet[2719]: E0715 05:20:44.270703 2719 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already 
exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 05:20:44.284905 kubelet[2719]: I0715 05:20:44.284842 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.284824711 podStartE2EDuration="1.284824711s" podCreationTimestamp="2025-07-15 05:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:44.2846166 +0000 UTC m=+1.129740867" watchObservedRunningTime="2025-07-15 05:20:44.284824711 +0000 UTC m=+1.129948988" Jul 15 05:20:44.297573 kubelet[2719]: I0715 05:20:44.297511 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.297494972 podStartE2EDuration="1.297494972s" podCreationTimestamp="2025-07-15 05:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:44.291031043 +0000 UTC m=+1.136155320" watchObservedRunningTime="2025-07-15 05:20:44.297494972 +0000 UTC m=+1.142619249" Jul 15 05:20:45.618112 sudo[1777]: pam_unix(sudo:session): session closed for user root Jul 15 05:20:45.619485 sshd[1776]: Connection closed by 10.0.0.1 port 33726 Jul 15 05:20:45.619908 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jul 15 05:20:45.625069 systemd[1]: sshd@6-10.0.0.126:22-10.0.0.1:33726.service: Deactivated successfully. Jul 15 05:20:45.627790 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 05:20:45.628052 systemd[1]: session-7.scope: Consumed 4.359s CPU time, 263.2M memory peak. Jul 15 05:20:45.629506 systemd-logind[1536]: Session 7 logged out. Waiting for processes to exit. Jul 15 05:20:45.630885 systemd-logind[1536]: Removed session 7. Jul 15 05:20:48.920082 kubelet[2719]: I0715 05:20:48.920039 2719 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 05:20:48.920588 containerd[1561]: time="2025-07-15T05:20:48.920409576Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 05:20:48.920851 kubelet[2719]: I0715 05:20:48.920626 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 05:20:49.739780 kubelet[2719]: I0715 05:20:49.739712 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.739670007 podStartE2EDuration="6.739670007s" podCreationTimestamp="2025-07-15 05:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:44.297828366 +0000 UTC m=+1.142952643" watchObservedRunningTime="2025-07-15 05:20:49.739670007 +0000 UTC m=+6.584794284" Jul 15 05:20:49.752390 systemd[1]: Created slice kubepods-besteffort-pod687ca29f_4fc4_42ef_94c4_6444b8c1213f.slice - libcontainer container kubepods-besteffort-pod687ca29f_4fc4_42ef_94c4_6444b8c1213f.slice. Jul 15 05:20:49.768207 systemd[1]: Created slice kubepods-burstable-pod3d5d9f83_e226_4ce6_a454_e80087969575.slice - libcontainer container kubepods-burstable-pod3d5d9f83_e226_4ce6_a454_e80087969575.slice. 
Jul 15 05:20:49.827057 systemd[1]: Created slice kubepods-besteffort-pod3689fd1d_2da5_4360_90c0_77b17e259c52.slice - libcontainer container kubepods-besteffort-pod3689fd1d_2da5_4360_90c0_77b17e259c52.slice. Jul 15 05:20:49.861955 kubelet[2719]: I0715 05:20:49.861884 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5d9f83-e226-4ce6-a454-e80087969575-clustermesh-secrets\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862105 kubelet[2719]: I0715 05:20:49.861990 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-hubble-tls\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862105 kubelet[2719]: I0715 05:20:49.862016 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/687ca29f-4fc4-42ef-94c4-6444b8c1213f-kube-proxy\") pod \"kube-proxy-xsb9p\" (UID: \"687ca29f-4fc4-42ef-94c4-6444b8c1213f\") " pod="kube-system/kube-proxy-xsb9p" Jul 15 05:20:49.862105 kubelet[2719]: I0715 05:20:49.862033 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/687ca29f-4fc4-42ef-94c4-6444b8c1213f-xtables-lock\") pod \"kube-proxy-xsb9p\" (UID: \"687ca29f-4fc4-42ef-94c4-6444b8c1213f\") " pod="kube-system/kube-proxy-xsb9p" Jul 15 05:20:49.862105 kubelet[2719]: I0715 05:20:49.862050 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-xtables-lock\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862200 kubelet[2719]: I0715 05:20:49.862163 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-kernel\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862249 kubelet[2719]: I0715 05:20:49.862218 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-bpf-maps\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862272 kubelet[2719]: I0715 05:20:49.862251 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmd4l\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-kube-api-access-qmd4l\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862298 kubelet[2719]: I0715 05:20:49.862274 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-hostproc\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " 
pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862394 kubelet[2719]: I0715 05:20:49.862296 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-cgroup\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862394 kubelet[2719]: I0715 05:20:49.862319 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cni-path\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862394 kubelet[2719]: I0715 05:20:49.862367 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-lib-modules\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862394 kubelet[2719]: I0715 05:20:49.862389 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-config-path\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862478 kubelet[2719]: I0715 05:20:49.862415 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/687ca29f-4fc4-42ef-94c4-6444b8c1213f-lib-modules\") pod \"kube-proxy-xsb9p\" (UID: \"687ca29f-4fc4-42ef-94c4-6444b8c1213f\") " pod="kube-system/kube-proxy-xsb9p" Jul 15 05:20:49.862478 kubelet[2719]: I0715 05:20:49.862439 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pwl\" (UniqueName: \"kubernetes.io/projected/687ca29f-4fc4-42ef-94c4-6444b8c1213f-kube-api-access-65pwl\") pod \"kube-proxy-xsb9p\" (UID: \"687ca29f-4fc4-42ef-94c4-6444b8c1213f\") " pod="kube-system/kube-proxy-xsb9p" Jul 15 05:20:49.862478 kubelet[2719]: I0715 05:20:49.862459 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-run\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862543 kubelet[2719]: I0715 05:20:49.862479 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-etc-cni-netd\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.862543 kubelet[2719]: I0715 05:20:49.862503 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-net\") pod \"cilium-9cdtx\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " pod="kube-system/cilium-9cdtx" Jul 15 05:20:49.963686 kubelet[2719]: I0715 05:20:49.963346 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vbg98\" (UniqueName: \"kubernetes.io/projected/3689fd1d-2da5-4360-90c0-77b17e259c52-kube-api-access-vbg98\") pod \"cilium-operator-5d85765b45-8vdls\" (UID: \"3689fd1d-2da5-4360-90c0-77b17e259c52\") " pod="kube-system/cilium-operator-5d85765b45-8vdls" Jul 15 05:20:49.963686 kubelet[2719]: I0715 05:20:49.963443 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3689fd1d-2da5-4360-90c0-77b17e259c52-cilium-config-path\") pod \"cilium-operator-5d85765b45-8vdls\" (UID: \"3689fd1d-2da5-4360-90c0-77b17e259c52\") " pod="kube-system/cilium-operator-5d85765b45-8vdls" Jul 15 05:20:50.066851 containerd[1561]: time="2025-07-15T05:20:50.066684803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsb9p,Uid:687ca29f-4fc4-42ef-94c4-6444b8c1213f,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:50.071850 containerd[1561]: time="2025-07-15T05:20:50.071814850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cdtx,Uid:3d5d9f83-e226-4ce6-a454-e80087969575,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:50.132759 containerd[1561]: time="2025-07-15T05:20:50.132705196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8vdls,Uid:3689fd1d-2da5-4360-90c0-77b17e259c52,Namespace:kube-system,Attempt:0,}" Jul 15 05:20:50.479603 containerd[1561]: time="2025-07-15T05:20:50.479484208Z" level=info msg="connecting to shim 00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39" address="unix:///run/containerd/s/1e213e2a3109ba5e616af1422d9321cd1c298196c03311904aa54228fb4f04b8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:50.486291 containerd[1561]: time="2025-07-15T05:20:50.486204313Z" level=info msg="connecting to shim e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:50.490626 containerd[1561]: time="2025-07-15T05:20:50.489857197Z" level=info msg="connecting to shim fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd" address="unix:///run/containerd/s/8704f893640ac265d1fa3b19da56e3f340e7aa22303f3174431f2933920501ae" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:20:50.509832 systemd[1]: Started cri-containerd-00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39.scope - libcontainer container 00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39. Jul 15 05:20:50.514560 systemd[1]: Started cri-containerd-fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd.scope - libcontainer container fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd. Jul 15 05:20:50.519848 systemd[1]: Started cri-containerd-e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7.scope - libcontainer container e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7. 
Jul 15 05:20:50.550319 containerd[1561]: time="2025-07-15T05:20:50.550276502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsb9p,Uid:687ca29f-4fc4-42ef-94c4-6444b8c1213f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39\"" Jul 15 05:20:50.554311 containerd[1561]: time="2025-07-15T05:20:50.554271189Z" level=info msg="CreateContainer within sandbox \"00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 05:20:50.558436 containerd[1561]: time="2025-07-15T05:20:50.558395732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cdtx,Uid:3d5d9f83-e226-4ce6-a454-e80087969575,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\"" Jul 15 05:20:50.562182 containerd[1561]: time="2025-07-15T05:20:50.562070417Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 05:20:50.571825 containerd[1561]: time="2025-07-15T05:20:50.571765249Z" level=info msg="Container b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:50.580220 containerd[1561]: time="2025-07-15T05:20:50.580177830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8vdls,Uid:3689fd1d-2da5-4360-90c0-77b17e259c52,Namespace:kube-system,Attempt:0,} returns sandbox id \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\"" Jul 15 05:20:50.581730 containerd[1561]: time="2025-07-15T05:20:50.581623384Z" level=info msg="CreateContainer within sandbox \"00fda6b6de716e0c06a416025409a59fed0b2b9c6f889244a0f751a660a2eb39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f\"" Jul 15 05:20:50.582127 containerd[1561]: time="2025-07-15T05:20:50.582100516Z" level=info msg="StartContainer for \"b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f\"" Jul 15 05:20:50.583398 containerd[1561]: time="2025-07-15T05:20:50.583375833Z" level=info msg="connecting to shim b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f" address="unix:///run/containerd/s/1e213e2a3109ba5e616af1422d9321cd1c298196c03311904aa54228fb4f04b8" protocol=ttrpc version=3 Jul 15 05:20:50.613007 systemd[1]: Started cri-containerd-b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f.scope - libcontainer container b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f. 
Jul 15 05:20:50.666957 containerd[1561]: time="2025-07-15T05:20:50.666909575Z" level=info msg="StartContainer for \"b5e323164ecbb2506533690662b8931e61a77a25200b10fc9ae9e99a73b96d2f\" returns successfully" Jul 15 05:20:51.291380 kubelet[2719]: I0715 05:20:51.291320 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsb9p" podStartSLOduration=2.291302236 podStartE2EDuration="2.291302236s" podCreationTimestamp="2025-07-15 05:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:20:51.291239236 +0000 UTC m=+8.136363513" watchObservedRunningTime="2025-07-15 05:20:51.291302236 +0000 UTC m=+8.136426513" Jul 15 05:20:53.754573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684127861.mount: Deactivated successfully. Jul 15 05:20:56.391446 update_engine[1541]: I20250715 05:20:56.391383 1541 update_attempter.cc:509] Updating boot flags... Jul 15 05:20:58.004255 containerd[1561]: time="2025-07-15T05:20:58.004177247Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:58.009224 containerd[1561]: time="2025-07-15T05:20:58.009130872Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 15 05:20:58.010755 containerd[1561]: time="2025-07-15T05:20:58.010700019Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:58.012018 containerd[1561]: time="2025-07-15T05:20:58.011960560Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.449846017s" Jul 15 05:20:58.012018 containerd[1561]: time="2025-07-15T05:20:58.012004764Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 15 05:20:58.013106 containerd[1561]: time="2025-07-15T05:20:58.013072699Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 05:20:58.014560 containerd[1561]: time="2025-07-15T05:20:58.014472855Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:20:58.026069 containerd[1561]: time="2025-07-15T05:20:58.026013874Z" level=info msg="Container ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:58.033658 containerd[1561]: time="2025-07-15T05:20:58.033578542Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\"" Jul 15 05:20:58.034265 containerd[1561]: time="2025-07-15T05:20:58.034194349Z" level=info msg="StartContainer for \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\"" Jul 15 05:20:58.035426 containerd[1561]: time="2025-07-15T05:20:58.035381662Z" level=info msg="connecting to shim ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" protocol=ttrpc version=3 Jul 15 05:20:58.090828 systemd[1]: Started cri-containerd-ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5.scope - libcontainer container ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5. Jul 15 05:20:58.134031 systemd[1]: cri-containerd-ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5.scope: Deactivated successfully. Jul 15 05:20:58.135767 containerd[1561]: time="2025-07-15T05:20:58.135723759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" id:\"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" pid:3159 exited_at:{seconds:1752556858 nanos:135072533}" Jul 15 05:20:58.273569 containerd[1561]: time="2025-07-15T05:20:58.273437687Z" level=info msg="received exit event container_id:\"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" id:\"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" pid:3159 exited_at:{seconds:1752556858 nanos:135072533}" Jul 15 05:20:58.275060 containerd[1561]: time="2025-07-15T05:20:58.275032602Z" level=info msg="StartContainer for \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" returns successfully" Jul 15 05:20:58.294313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5-rootfs.mount: Deactivated successfully. Jul 15 05:20:59.298018 containerd[1561]: time="2025-07-15T05:20:59.297966990Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 05:20:59.312338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645487014.mount: Deactivated successfully. Jul 15 05:20:59.334294 containerd[1561]: time="2025-07-15T05:20:59.334247523Z" level=info msg="Container dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:20:59.337754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380204383.mount: Deactivated successfully. 
Jul 15 05:20:59.341917 containerd[1561]: time="2025-07-15T05:20:59.341877804Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\"" Jul 15 05:20:59.344495 containerd[1561]: time="2025-07-15T05:20:59.344437655Z" level=info msg="StartContainer for \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\"" Jul 15 05:20:59.345261 containerd[1561]: time="2025-07-15T05:20:59.345228034Z" level=info msg="connecting to shim dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" protocol=ttrpc version=3 Jul 15 05:20:59.364791 systemd[1]: Started cri-containerd-dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726.scope - libcontainer container dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726. Jul 15 05:20:59.399657 containerd[1561]: time="2025-07-15T05:20:59.399550762Z" level=info msg="StartContainer for \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" returns successfully" Jul 15 05:20:59.414796 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 05:20:59.415029 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 05:20:59.415562 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:20:59.417812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 05:20:59.420127 systemd[1]: cri-containerd-dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726.scope: Deactivated successfully. Jul 15 05:20:59.422233 containerd[1561]: time="2025-07-15T05:20:59.422180441Z" level=info msg="received exit event container_id:\"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" id:\"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" pid:3213 exited_at:{seconds:1752556859 nanos:421867569}" Jul 15 05:20:59.422928 containerd[1561]: time="2025-07-15T05:20:59.422895787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" id:\"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" pid:3213 exited_at:{seconds:1752556859 nanos:421867569}" Jul 15 05:20:59.445601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 05:20:59.673147 containerd[1561]: time="2025-07-15T05:20:59.673075643Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:59.674014 containerd[1561]: time="2025-07-15T05:20:59.673946654Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 15 05:20:59.675823 containerd[1561]: time="2025-07-15T05:20:59.675780390Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 05:20:59.677208 containerd[1561]: time="2025-07-15T05:20:59.677177969Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.664075483s" Jul 15 05:20:59.677208 containerd[1561]: time="2025-07-15T05:20:59.677212054Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 15 05:20:59.679387 containerd[1561]: time="2025-07-15T05:20:59.679355736Z" level=info msg="CreateContainer within sandbox \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 05:20:59.686955 containerd[1561]: time="2025-07-15T05:20:59.686919132Z" level=info msg="Container 7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:00.023366 containerd[1561]: time="2025-07-15T05:21:00.023211326Z" level=info msg="CreateContainer within sandbox \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\"" Jul 15 05:21:00.024129 containerd[1561]: time="2025-07-15T05:21:00.024084160Z" level=info msg="StartContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\"" Jul 15 05:21:00.025135 containerd[1561]: time="2025-07-15T05:21:00.025101937Z" level=info msg="connecting to shim 7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7" address="unix:///run/containerd/s/8704f893640ac265d1fa3b19da56e3f340e7aa22303f3174431f2933920501ae" protocol=ttrpc version=3 Jul 15 05:21:00.048840 systemd[1]: Started cri-containerd-7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7.scope - libcontainer container 7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7. 
Jul 15 05:21:00.252804 containerd[1561]: time="2025-07-15T05:21:00.252754033Z" level=info msg="StartContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" returns successfully" Jul 15 05:21:00.303278 containerd[1561]: time="2025-07-15T05:21:00.303192424Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 05:21:00.307618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726-rootfs.mount: Deactivated successfully. Jul 15 05:21:00.503024 containerd[1561]: time="2025-07-15T05:21:00.502661189Z" level=info msg="Container 2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:00.513351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527706038.mount: Deactivated successfully. Jul 15 05:21:00.516831 kubelet[2719]: I0715 05:21:00.515588 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8vdls" podStartSLOduration=2.419111288 podStartE2EDuration="11.515558456s" podCreationTimestamp="2025-07-15 05:20:49 +0000 UTC" firstStartedPulling="2025-07-15 05:20:50.581459321 +0000 UTC m=+7.426583598" lastFinishedPulling="2025-07-15 05:20:59.677906479 +0000 UTC m=+16.523030766" observedRunningTime="2025-07-15 05:21:00.51490542 +0000 UTC m=+17.360029697" watchObservedRunningTime="2025-07-15 05:21:00.515558456 +0000 UTC m=+17.360682733" Jul 15 05:21:00.799016 containerd[1561]: time="2025-07-15T05:21:00.798950173Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\"" Jul 15 05:21:00.799891 containerd[1561]: time="2025-07-15T05:21:00.799801255Z" level=info msg="StartContainer for \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\"" Jul 15 05:21:00.801504 containerd[1561]: time="2025-07-15T05:21:00.801450208Z" level=info msg="connecting to shim 2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" protocol=ttrpc version=3 Jul 15 05:21:00.856227 systemd[1]: Started cri-containerd-2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3.scope - libcontainer container 2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3. Jul 15 05:21:00.947212 systemd[1]: cri-containerd-2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3.scope: Deactivated successfully. 
Jul 15 05:21:00.947629 containerd[1561]: time="2025-07-15T05:21:00.947582224Z" level=info msg="StartContainer for \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" returns successfully" Jul 15 05:21:00.950224 containerd[1561]: time="2025-07-15T05:21:00.950174433Z" level=info msg="received exit event container_id:\"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" id:\"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" pid:3302 exited_at:{seconds:1752556860 nanos:949930912}" Jul 15 05:21:00.950526 containerd[1561]: time="2025-07-15T05:21:00.950491133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" id:\"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" pid:3302 exited_at:{seconds:1752556860 nanos:949930912}" Jul 15 05:21:00.987598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3-rootfs.mount: Deactivated successfully. Jul 15 05:21:01.308936 containerd[1561]: time="2025-07-15T05:21:01.308895402Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 05:21:01.323457 containerd[1561]: time="2025-07-15T05:21:01.323036698Z" level=info msg="Container 3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:01.331609 containerd[1561]: time="2025-07-15T05:21:01.331544141Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\"" Jul 15 05:21:01.332080 containerd[1561]: time="2025-07-15T05:21:01.332038397Z" level=info msg="StartContainer for \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\"" Jul 15 05:21:01.332971 containerd[1561]: time="2025-07-15T05:21:01.332929173Z" level=info msg="connecting to shim 3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" protocol=ttrpc version=3 Jul 15 05:21:01.356869 systemd[1]: Started cri-containerd-3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c.scope - libcontainer container 3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c. Jul 15 05:21:01.414977 systemd[1]: cri-containerd-3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c.scope: Deactivated successfully. 
Jul 15 05:21:01.415568 containerd[1561]: time="2025-07-15T05:21:01.415507667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" id:\"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" pid:3340 exited_at:{seconds:1752556861 nanos:415110956}" Jul 15 05:21:01.416949 containerd[1561]: time="2025-07-15T05:21:01.416914039Z" level=info msg="received exit event container_id:\"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" id:\"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" pid:3340 exited_at:{seconds:1752556861 nanos:415110956}" Jul 15 05:21:01.424517 containerd[1561]: time="2025-07-15T05:21:01.424475712Z" level=info msg="StartContainer for \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" returns successfully" Jul 15 05:21:01.440119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c-rootfs.mount: Deactivated successfully. Jul 15 05:21:02.317801 containerd[1561]: time="2025-07-15T05:21:02.317735995Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 05:21:02.333521 containerd[1561]: time="2025-07-15T05:21:02.333476650Z" level=info msg="Container c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:02.342013 containerd[1561]: time="2025-07-15T05:21:02.341963104Z" level=info msg="CreateContainer within sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\"" Jul 15 05:21:02.342623 containerd[1561]: time="2025-07-15T05:21:02.342564091Z" level=info msg="StartContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\"" Jul 15 05:21:02.343905 containerd[1561]: time="2025-07-15T05:21:02.343869560Z" level=info msg="connecting to shim c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae" address="unix:///run/containerd/s/bb57eff188887b13e1e2d0508eb817db1b872e9bab2d8592f032e9a2306623a8" protocol=ttrpc version=3 Jul 15 05:21:02.370892 systemd[1]: Started cri-containerd-c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae.scope - libcontainer container c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae. Jul 15 05:21:02.409291 containerd[1561]: time="2025-07-15T05:21:02.409241101Z" level=info msg="StartContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" returns successfully" Jul 15 05:21:02.547031 containerd[1561]: time="2025-07-15T05:21:02.546972192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" id:\"53580be279d3d0f3bdb3d9f27ab88d2b18f23535164753659b574b2c2e688592\" pid:3416 exited_at:{seconds:1752556862 nanos:545795457}" Jul 15 05:21:02.591657 kubelet[2719]: I0715 05:21:02.591051 2719 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 05:21:02.905590 systemd[1]: Created slice kubepods-burstable-podea49b9aa_fe5a_46c2_94bc_64b79e41495e.slice - libcontainer container kubepods-burstable-podea49b9aa_fe5a_46c2_94bc_64b79e41495e.slice. 
Jul 15 05:21:02.916487 systemd[1]: Created slice kubepods-burstable-pod7db88f2d_f887_4f61_b384_aea62659ab5c.slice - libcontainer container kubepods-burstable-pod7db88f2d_f887_4f61_b384_aea62659ab5c.slice. Jul 15 05:21:02.958464 kubelet[2719]: I0715 05:21:02.958404 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db88f2d-f887-4f61-b384-aea62659ab5c-config-volume\") pod \"coredns-7c65d6cfc9-hqhk8\" (UID: \"7db88f2d-f887-4f61-b384-aea62659ab5c\") " pod="kube-system/coredns-7c65d6cfc9-hqhk8" Jul 15 05:21:02.958464 kubelet[2719]: I0715 05:21:02.958453 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea49b9aa-fe5a-46c2-94bc-64b79e41495e-config-volume\") pod \"coredns-7c65d6cfc9-j7q6s\" (UID: \"ea49b9aa-fe5a-46c2-94bc-64b79e41495e\") " pod="kube-system/coredns-7c65d6cfc9-j7q6s" Jul 15 05:21:02.958464 kubelet[2719]: I0715 05:21:02.958476 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6xr\" (UniqueName: \"kubernetes.io/projected/ea49b9aa-fe5a-46c2-94bc-64b79e41495e-kube-api-access-6b6xr\") pod \"coredns-7c65d6cfc9-j7q6s\" (UID: \"ea49b9aa-fe5a-46c2-94bc-64b79e41495e\") " pod="kube-system/coredns-7c65d6cfc9-j7q6s" Jul 15 05:21:02.958760 kubelet[2719]: I0715 05:21:02.958527 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knt7f\" (UniqueName: \"kubernetes.io/projected/7db88f2d-f887-4f61-b384-aea62659ab5c-kube-api-access-knt7f\") pod \"coredns-7c65d6cfc9-hqhk8\" (UID: \"7db88f2d-f887-4f61-b384-aea62659ab5c\") " pod="kube-system/coredns-7c65d6cfc9-hqhk8" Jul 15 05:21:03.213740 containerd[1561]: time="2025-07-15T05:21:03.213556330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7q6s,Uid:ea49b9aa-fe5a-46c2-94bc-64b79e41495e,Namespace:kube-system,Attempt:0,}" Jul 15 05:21:03.220386 containerd[1561]: time="2025-07-15T05:21:03.220339014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hqhk8,Uid:7db88f2d-f887-4f61-b384-aea62659ab5c,Namespace:kube-system,Attempt:0,}" Jul 15 05:21:04.609707 systemd-networkd[1473]: cilium_host: Link UP Jul 15 05:21:04.609911 systemd-networkd[1473]: cilium_net: Link UP Jul 15 05:21:04.610136 systemd-networkd[1473]: cilium_net: Gained carrier Jul 15 05:21:04.610344 systemd-networkd[1473]: cilium_host: Gained carrier Jul 15 05:21:04.720469 systemd-networkd[1473]: cilium_vxlan: Link UP Jul 15 05:21:04.720480 systemd-networkd[1473]: cilium_vxlan: Gained carrier Jul 15 05:21:04.946677 kernel: NET: Registered PF_ALG protocol family Jul 15 05:21:05.157743 systemd-networkd[1473]: cilium_net: Gained IPv6LL Jul 15 05:21:05.220774 systemd-networkd[1473]: cilium_host: Gained IPv6LL Jul 15 05:21:05.612426 systemd-networkd[1473]: lxc_health: Link UP Jul 15 05:21:05.614901 systemd-networkd[1473]: lxc_health: Gained carrier Jul 15 05:21:05.803682 kernel: eth0: renamed from tmp109eb Jul 15 05:21:05.805987 systemd-networkd[1473]: lxc3590802f5105: Link UP Jul 15 05:21:05.817477 systemd-networkd[1473]: tmp32097: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 15 05:21:05.817609 systemd-networkd[1473]: tmp32097: Cannot enable IPv6, ignoring: No such file or directory Jul 15 05:21:05.817653 systemd-networkd[1473]: tmp32097: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Jul 15 05:21:05.817680 systemd-networkd[1473]: tmp32097: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Jul 15 05:21:05.817704 systemd-networkd[1473]: tmp32097: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Jul 15 05:21:05.817729 systemd-networkd[1473]: tmp32097: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Jul 15 05:21:05.819774 kernel: eth0: renamed from tmp32097 Jul 15 05:21:05.820374 systemd-networkd[1473]: lxc252f4ad54b7b: Link UP Jul 15 05:21:05.820792 systemd-networkd[1473]: lxc3590802f5105: Gained carrier Jul 15 05:21:05.821062 systemd-networkd[1473]: lxc252f4ad54b7b: Gained carrier Jul 15 05:21:05.923788 systemd-networkd[1473]: cilium_vxlan: Gained IPv6LL Jul 15 05:21:06.452299 kubelet[2719]: I0715 05:21:06.452229 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9cdtx" podStartSLOduration=9.999363116 podStartE2EDuration="17.452211419s" podCreationTimestamp="2025-07-15 05:20:49 +0000 UTC" firstStartedPulling="2025-07-15 05:20:50.560114518 +0000 UTC m=+7.405238795" lastFinishedPulling="2025-07-15 05:20:58.012962831 +0000 UTC m=+14.858087098" observedRunningTime="2025-07-15 05:21:03.337766101 +0000 UTC m=+20.182890378" watchObservedRunningTime="2025-07-15 05:21:06.452211419 +0000 UTC m=+23.297335696" Jul 15 05:21:06.819868 systemd-networkd[1473]: lxc_health: Gained IPv6LL Jul 15 05:21:07.011830 systemd-networkd[1473]: lxc3590802f5105: Gained IPv6LL Jul 15 05:21:07.587847 systemd-networkd[1473]: lxc252f4ad54b7b: Gained IPv6LL Jul 15 05:21:09.819879 containerd[1561]: time="2025-07-15T05:21:09.819815995Z" level=info msg="connecting to shim 32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a" address="unix:///run/containerd/s/1a52ec93dd36b61c6d6e8f59ece98817015ed39bd751483cd76b2d48d845007d" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:09.822032 containerd[1561]: time="2025-07-15T05:21:09.821985316Z" level=info msg="connecting to shim 109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe" address="unix:///run/containerd/s/44eb7238859069fabdeec3161735a825aa2375f3ecc835875becdf67ba52c8eb" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:21:09.850838 systemd[1]: Started cri-containerd-32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a.scope - libcontainer container 32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a. Jul 15 05:21:09.854671 systemd[1]: Started cri-containerd-109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe.scope - libcontainer container 109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe. 
Jul 15 05:21:09.866270 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:21:09.869437 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 05:21:09.972214 containerd[1561]: time="2025-07-15T05:21:09.972153154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j7q6s,Uid:ea49b9aa-fe5a-46c2-94bc-64b79e41495e,Namespace:kube-system,Attempt:0,} returns sandbox id \"32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a\"" Jul 15 05:21:09.995665 containerd[1561]: time="2025-07-15T05:21:09.995579113Z" level=info msg="CreateContainer within sandbox \"32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:21:10.050466 containerd[1561]: time="2025-07-15T05:21:10.050402737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hqhk8,Uid:7db88f2d-f887-4f61-b384-aea62659ab5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe\"" Jul 15 05:21:10.052297 containerd[1561]: time="2025-07-15T05:21:10.052261521Z" level=info msg="CreateContainer within sandbox \"109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 05:21:10.776483 containerd[1561]: time="2025-07-15T05:21:10.776437688Z" level=info msg="Container 1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:10.789105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585372235.mount: Deactivated successfully. 
Jul 15 05:21:10.886401 containerd[1561]: time="2025-07-15T05:21:10.886360772Z" level=info msg="Container 3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:21:10.899330 containerd[1561]: time="2025-07-15T05:21:10.899279160Z" level=info msg="CreateContainer within sandbox \"109eb00a6d75841e6b64e03e8e53b33d9817c6e006dba053df3bf51cd659cebe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360\"" Jul 15 05:21:10.900106 containerd[1561]: time="2025-07-15T05:21:10.900077394Z" level=info msg="StartContainer for \"3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360\"" Jul 15 05:21:10.901088 containerd[1561]: time="2025-07-15T05:21:10.901047753Z" level=info msg="connecting to shim 3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360" address="unix:///run/containerd/s/44eb7238859069fabdeec3161735a825aa2375f3ecc835875becdf67ba52c8eb" protocol=ttrpc version=3 Jul 15 05:21:10.908339 containerd[1561]: time="2025-07-15T05:21:10.908282830Z" level=info msg="CreateContainer within sandbox \"32097b7a6c395e3207f45b802ce4fbed29e64883787dd3289270d2d94d42069a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d\"" Jul 15 05:21:10.908834 containerd[1561]: time="2025-07-15T05:21:10.908804463Z" level=info msg="StartContainer for \"1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d\"" Jul 15 05:21:10.909874 containerd[1561]: time="2025-07-15T05:21:10.909830256Z" level=info msg="connecting to shim 1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d" address="unix:///run/containerd/s/1a52ec93dd36b61c6d6e8f59ece98817015ed39bd751483cd76b2d48d845007d" protocol=ttrpc version=3 Jul 15 05:21:10.929809 systemd[1]: Started cri-containerd-3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360.scope - libcontainer container 3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360. Jul 15 05:21:10.933793 systemd[1]: Started cri-containerd-1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d.scope - libcontainer container 1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d. 
Jul 15 05:21:10.973248 containerd[1561]: time="2025-07-15T05:21:10.973201999Z" level=info msg="StartContainer for \"1af8390bce9c5188438c582002fe91624c21c4363c767325684c80dd8c826c4d\" returns successfully" Jul 15 05:21:10.973382 containerd[1561]: time="2025-07-15T05:21:10.973308801Z" level=info msg="StartContainer for \"3cbb52dd8a38b3f777fd9f0cbd66914a7fb252d3526d3d577692e1aadcb35360\" returns successfully" Jul 15 05:21:11.365906 kubelet[2719]: I0715 05:21:11.365844 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hqhk8" podStartSLOduration=22.365826497 podStartE2EDuration="22.365826497s" podCreationTimestamp="2025-07-15 05:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:21:11.364832955 +0000 UTC m=+28.209957232" watchObservedRunningTime="2025-07-15 05:21:11.365826497 +0000 UTC m=+28.210950774" Jul 15 05:21:11.376775 kubelet[2719]: I0715 05:21:11.373853 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j7q6s" podStartSLOduration=22.373838252 podStartE2EDuration="22.373838252s" podCreationTimestamp="2025-07-15 05:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:21:11.372668148 +0000 UTC m=+28.217792425" watchObservedRunningTime="2025-07-15 05:21:11.373838252 +0000 UTC m=+28.218962529" Jul 15 05:21:12.383306 systemd[1]: Started sshd@7-10.0.0.126:22-10.0.0.1:60676.service - OpenSSH per-connection server daemon (10.0.0.1:60676). Jul 15 05:21:12.438813 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 60676 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:12.440170 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:12.444267 systemd-logind[1536]: New session 8 of user core. Jul 15 05:21:12.456778 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 05:21:12.584057 sshd[4063]: Connection closed by 10.0.0.1 port 60676 Jul 15 05:21:12.584350 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:12.589529 systemd[1]: sshd@7-10.0.0.126:22-10.0.0.1:60676.service: Deactivated successfully. Jul 15 05:21:12.591494 systemd[1]: session-8.scope: Deactivated successfully. Jul 15 05:21:12.593123 systemd-logind[1536]: Session 8 logged out. Waiting for processes to exit. Jul 15 05:21:12.594616 systemd-logind[1536]: Removed session 8. Jul 15 05:21:17.601038 systemd[1]: Started sshd@8-10.0.0.126:22-10.0.0.1:60684.service - OpenSSH per-connection server daemon (10.0.0.1:60684). Jul 15 05:21:17.650288 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 60684 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:17.651998 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:17.657180 systemd-logind[1536]: New session 9 of user core. Jul 15 05:21:17.663815 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 05:21:17.781002 sshd[4083]: Connection closed by 10.0.0.1 port 60684 Jul 15 05:21:17.781397 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:17.785238 systemd[1]: sshd@8-10.0.0.126:22-10.0.0.1:60684.service: Deactivated successfully. Jul 15 05:21:17.787141 systemd[1]: session-9.scope: Deactivated successfully. 
Jul 15 05:21:17.788057 systemd-logind[1536]: Session 9 logged out. Waiting for processes to exit. Jul 15 05:21:17.789290 systemd-logind[1536]: Removed session 9. Jul 15 05:21:22.793524 systemd[1]: Started sshd@9-10.0.0.126:22-10.0.0.1:44030.service - OpenSSH per-connection server daemon (10.0.0.1:44030). Jul 15 05:21:22.844184 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 44030 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:22.845773 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:22.850515 systemd-logind[1536]: New session 10 of user core. Jul 15 05:21:22.862760 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 05:21:22.977372 sshd[4103]: Connection closed by 10.0.0.1 port 44030 Jul 15 05:21:22.977761 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:22.981946 systemd[1]: sshd@9-10.0.0.126:22-10.0.0.1:44030.service: Deactivated successfully. Jul 15 05:21:22.983779 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 05:21:22.984726 systemd-logind[1536]: Session 10 logged out. Waiting for processes to exit. Jul 15 05:21:22.985890 systemd-logind[1536]: Removed session 10. Jul 15 05:21:27.996305 systemd[1]: Started sshd@10-10.0.0.126:22-10.0.0.1:36942.service - OpenSSH per-connection server daemon (10.0.0.1:36942). Jul 15 05:21:28.053351 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 36942 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:28.055246 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:28.059500 systemd-logind[1536]: New session 11 of user core. Jul 15 05:21:28.068841 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 05:21:28.192584 sshd[4121]: Connection closed by 10.0.0.1 port 36942 Jul 15 05:21:28.192961 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:28.197185 systemd[1]: sshd@10-10.0.0.126:22-10.0.0.1:36942.service: Deactivated successfully. Jul 15 05:21:28.199326 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 05:21:28.200151 systemd-logind[1536]: Session 11 logged out. Waiting for processes to exit. Jul 15 05:21:28.201459 systemd-logind[1536]: Removed session 11. Jul 15 05:21:33.211912 systemd[1]: Started sshd@11-10.0.0.126:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). Jul 15 05:21:33.265415 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:33.267153 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:33.271264 systemd-logind[1536]: New session 12 of user core. Jul 15 05:21:33.284936 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 05:21:33.400397 sshd[4139]: Connection closed by 10.0.0.1 port 36946 Jul 15 05:21:33.400860 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:33.411693 systemd[1]: sshd@11-10.0.0.126:22-10.0.0.1:36946.service: Deactivated successfully. Jul 15 05:21:33.414076 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 05:21:33.415166 systemd-logind[1536]: Session 12 logged out. Waiting for processes to exit. Jul 15 05:21:33.419567 systemd[1]: Started sshd@12-10.0.0.126:22-10.0.0.1:36958.service - OpenSSH per-connection server daemon (10.0.0.1:36958). 
Jul 15 05:21:33.421264 systemd-logind[1536]: Removed session 12. Jul 15 05:21:33.474821 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 36958 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:33.476552 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:33.481104 systemd-logind[1536]: New session 13 of user core. Jul 15 05:21:33.489019 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 05:21:33.636374 sshd[4156]: Connection closed by 10.0.0.1 port 36958 Jul 15 05:21:33.637352 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:33.646489 systemd[1]: sshd@12-10.0.0.126:22-10.0.0.1:36958.service: Deactivated successfully. Jul 15 05:21:33.649694 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 05:21:33.650614 systemd-logind[1536]: Session 13 logged out. Waiting for processes to exit. Jul 15 05:21:33.654918 systemd[1]: Started sshd@13-10.0.0.126:22-10.0.0.1:36974.service - OpenSSH per-connection server daemon (10.0.0.1:36974). Jul 15 05:21:33.656669 systemd-logind[1536]: Removed session 13. Jul 15 05:21:33.711265 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 36974 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:33.712959 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:33.717379 systemd-logind[1536]: New session 14 of user core. Jul 15 05:21:33.727769 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 05:21:33.840742 sshd[4170]: Connection closed by 10.0.0.1 port 36974 Jul 15 05:21:33.841054 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:33.845232 systemd[1]: sshd@13-10.0.0.126:22-10.0.0.1:36974.service: Deactivated successfully. Jul 15 05:21:33.847001 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 05:21:33.847644 systemd-logind[1536]: Session 14 logged out. Waiting for processes to exit. Jul 15 05:21:33.848616 systemd-logind[1536]: Removed session 14. Jul 15 05:21:38.861301 systemd[1]: Started sshd@14-10.0.0.126:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046). Jul 15 05:21:38.919540 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:38.921731 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:38.927089 systemd-logind[1536]: New session 15 of user core. Jul 15 05:21:38.937834 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 05:21:39.054302 sshd[4187]: Connection closed by 10.0.0.1 port 53046 Jul 15 05:21:39.054623 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:39.058771 systemd[1]: sshd@14-10.0.0.126:22-10.0.0.1:53046.service: Deactivated successfully. Jul 15 05:21:39.061121 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 05:21:39.061974 systemd-logind[1536]: Session 15 logged out. Waiting for processes to exit. Jul 15 05:21:39.063115 systemd-logind[1536]: Removed session 15. Jul 15 05:21:44.074697 systemd[1]: Started sshd@15-10.0.0.126:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). 
Jul 15 05:21:44.120107 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:44.121780 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:44.126201 systemd-logind[1536]: New session 16 of user core. Jul 15 05:21:44.135764 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 05:21:44.244576 sshd[4208]: Connection closed by 10.0.0.1 port 53052 Jul 15 05:21:44.244954 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:44.259286 systemd[1]: sshd@15-10.0.0.126:22-10.0.0.1:53052.service: Deactivated successfully. Jul 15 05:21:44.261182 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 05:21:44.262124 systemd-logind[1536]: Session 16 logged out. Waiting for processes to exit. Jul 15 05:21:44.264716 systemd[1]: Started sshd@16-10.0.0.126:22-10.0.0.1:53056.service - OpenSSH per-connection server daemon (10.0.0.1:53056). Jul 15 05:21:44.265578 systemd-logind[1536]: Removed session 16. Jul 15 05:21:44.323303 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 53056 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:44.324768 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:44.329388 systemd-logind[1536]: New session 17 of user core. Jul 15 05:21:44.338780 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 05:21:44.517723 sshd[4224]: Connection closed by 10.0.0.1 port 53056 Jul 15 05:21:44.518227 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:44.527461 systemd[1]: sshd@16-10.0.0.126:22-10.0.0.1:53056.service: Deactivated successfully. Jul 15 05:21:44.529435 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 05:21:44.530213 systemd-logind[1536]: Session 17 logged out. Waiting for processes to exit. Jul 15 05:21:44.532673 systemd[1]: Started sshd@17-10.0.0.126:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). Jul 15 05:21:44.533312 systemd-logind[1536]: Removed session 17. Jul 15 05:21:44.587197 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:44.589070 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:44.593864 systemd-logind[1536]: New session 18 of user core. Jul 15 05:21:44.604840 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 05:21:45.757563 sshd[4239]: Connection closed by 10.0.0.1 port 53062 Jul 15 05:21:45.758212 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:45.769807 systemd[1]: sshd@17-10.0.0.126:22-10.0.0.1:53062.service: Deactivated successfully. Jul 15 05:21:45.772165 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 05:21:45.774079 systemd-logind[1536]: Session 18 logged out. Waiting for processes to exit. Jul 15 05:21:45.777977 systemd[1]: Started sshd@18-10.0.0.126:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072). Jul 15 05:21:45.780124 systemd-logind[1536]: Removed session 18. 
Jul 15 05:21:45.835246 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:45.836532 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:45.840826 systemd-logind[1536]: New session 19 of user core. Jul 15 05:21:45.848799 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 05:21:46.069200 sshd[4261]: Connection closed by 10.0.0.1 port 53072 Jul 15 05:21:46.069860 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:46.079889 systemd[1]: sshd@18-10.0.0.126:22-10.0.0.1:53072.service: Deactivated successfully. Jul 15 05:21:46.081950 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 05:21:46.082988 systemd-logind[1536]: Session 19 logged out. Waiting for processes to exit. Jul 15 05:21:46.086117 systemd[1]: Started sshd@19-10.0.0.126:22-10.0.0.1:53074.service - OpenSSH per-connection server daemon (10.0.0.1:53074). Jul 15 05:21:46.087075 systemd-logind[1536]: Removed session 19. Jul 15 05:21:46.142537 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 53074 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:46.144336 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:46.148939 systemd-logind[1536]: New session 20 of user core. Jul 15 05:21:46.156771 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 05:21:46.278164 sshd[4275]: Connection closed by 10.0.0.1 port 53074 Jul 15 05:21:46.278518 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:46.283358 systemd[1]: sshd@19-10.0.0.126:22-10.0.0.1:53074.service: Deactivated successfully. Jul 15 05:21:46.285486 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 05:21:46.286425 systemd-logind[1536]: Session 20 logged out. Waiting for processes to exit. Jul 15 05:21:46.288034 systemd-logind[1536]: Removed session 20. Jul 15 05:21:51.298417 systemd[1]: Started sshd@20-10.0.0.126:22-10.0.0.1:52196.service - OpenSSH per-connection server daemon (10.0.0.1:52196). Jul 15 05:21:51.368685 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 52196 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:51.370330 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:51.375771 systemd-logind[1536]: New session 21 of user core. Jul 15 05:21:51.387841 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 05:21:51.503041 sshd[4293]: Connection closed by 10.0.0.1 port 52196 Jul 15 05:21:51.503390 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:51.507500 systemd[1]: sshd@20-10.0.0.126:22-10.0.0.1:52196.service: Deactivated successfully. Jul 15 05:21:51.509416 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 05:21:51.510238 systemd-logind[1536]: Session 21 logged out. Waiting for processes to exit. Jul 15 05:21:51.511318 systemd-logind[1536]: Removed session 21. Jul 15 05:21:56.521409 systemd[1]: Started sshd@21-10.0.0.126:22-10.0.0.1:52200.service - OpenSSH per-connection server daemon (10.0.0.1:52200). 
Jul 15 05:21:56.590498 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 52200 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:21:56.592138 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:21:56.596963 systemd-logind[1536]: New session 22 of user core. Jul 15 05:21:56.606882 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 05:21:56.717613 sshd[4314]: Connection closed by 10.0.0.1 port 52200 Jul 15 05:21:56.717957 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Jul 15 05:21:56.722488 systemd[1]: sshd@21-10.0.0.126:22-10.0.0.1:52200.service: Deactivated successfully. Jul 15 05:21:56.724225 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 05:21:56.725096 systemd-logind[1536]: Session 22 logged out. Waiting for processes to exit. Jul 15 05:21:56.726166 systemd-logind[1536]: Removed session 22. Jul 15 05:22:01.730287 systemd[1]: Started sshd@22-10.0.0.126:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788). Jul 15 05:22:01.855215 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:01.855343 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:01.859971 systemd-logind[1536]: New session 23 of user core. Jul 15 05:22:01.870815 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 05:22:02.101800 sshd[4330]: Connection closed by 10.0.0.1 port 58788 Jul 15 05:22:02.102150 sshd-session[4327]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:02.106349 systemd[1]: sshd@22-10.0.0.126:22-10.0.0.1:58788.service: Deactivated successfully. Jul 15 05:22:02.108619 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 05:22:02.109698 systemd-logind[1536]: Session 23 logged out. Waiting for processes to exit. Jul 15 05:22:02.111185 systemd-logind[1536]: Removed session 23. Jul 15 05:22:07.118207 systemd[1]: Started sshd@23-10.0.0.126:22-10.0.0.1:58796.service - OpenSSH per-connection server daemon (10.0.0.1:58796). Jul 15 05:22:07.170924 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 58796 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:07.172680 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:07.177425 systemd-logind[1536]: New session 24 of user core. Jul 15 05:22:07.188842 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 05:22:07.301718 sshd[4347]: Connection closed by 10.0.0.1 port 58796 Jul 15 05:22:07.302064 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:07.316706 systemd[1]: sshd@23-10.0.0.126:22-10.0.0.1:58796.service: Deactivated successfully. Jul 15 05:22:07.318615 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 05:22:07.319579 systemd-logind[1536]: Session 24 logged out. Waiting for processes to exit. Jul 15 05:22:07.323100 systemd[1]: Started sshd@24-10.0.0.126:22-10.0.0.1:58804.service - OpenSSH per-connection server daemon (10.0.0.1:58804). Jul 15 05:22:07.323982 systemd-logind[1536]: Removed session 24. 
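Each connection above follows the same lifecycle: sshd accepts the public key, pam_unix opens a session for core, systemd-logind allocates session N and systemd starts session-N.scope; on disconnect the scope is deactivated and the session is removed. Below is a minimal Python sketch that pairs the "New session"/"Removed session" entries and reports how long each session lasted, assuming one journal entry per line and the exact message wording quoted above (the timestamps carry no year, so 2025 is assumed from the boot banner):

    # Pair systemd-logind "New session"/"Removed session" entries from a journal
    # dump like this one and print per-session durations.
    import re
    import sys
    from datetime import datetime

    TS = r"(?P<ts>\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+)"
    NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (?P<id>\d+) of user (?P<user>\S+)\.")
    GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

    def parse_ts(ts: str) -> datetime:
        # Journal lines carry no year; 2025 is assumed here.
        return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

    opened = {}
    for line in sys.stdin:
        if m := NEW.search(line):
            opened[m["id"]] = (parse_ts(m["ts"]), m["user"])
        elif (m := GONE.search(line)) and m["id"] in opened:
            start, user = opened.pop(m["id"])
            print(f"session {m['id']} ({user}): {(parse_ts(m['ts']) - start).total_seconds():.3f}s")

For session 13 above (created 05:21:33.481, removed 05:21:33.656) this reports roughly 0.18 s, consistent with the short connect-and-exit pattern of these sessions.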
Jul 15 05:22:07.385111 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 58804 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:07.386819 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:07.391893 systemd-logind[1536]: New session 25 of user core. Jul 15 05:22:07.402766 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 05:22:08.748458 containerd[1561]: time="2025-07-15T05:22:08.748407967Z" level=info msg="StopContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" with timeout 30 (s)" Jul 15 05:22:08.755045 containerd[1561]: time="2025-07-15T05:22:08.755006231Z" level=info msg="Stop container \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" with signal terminated" Jul 15 05:22:08.767419 systemd[1]: cri-containerd-7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7.scope: Deactivated successfully. Jul 15 05:22:08.769219 containerd[1561]: time="2025-07-15T05:22:08.769094627Z" level=info msg="received exit event container_id:\"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" id:\"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" pid:3268 exited_at:{seconds:1752556928 nanos:768731583}" Jul 15 05:22:08.769219 containerd[1561]: time="2025-07-15T05:22:08.769198005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" id:\"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" pid:3268 exited_at:{seconds:1752556928 nanos:768731583}" Jul 15 05:22:08.769885 containerd[1561]: time="2025-07-15T05:22:08.769849472Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 05:22:08.777425 containerd[1561]: time="2025-07-15T05:22:08.777366184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" id:\"dd1da8aaf1e549c47fde712c774253000b27352133e2fc4334c6c30de93025f7\" pid:4383 exited_at:{seconds:1752556928 nanos:776932744}" Jul 15 05:22:08.780738 containerd[1561]: time="2025-07-15T05:22:08.780622444Z" level=info msg="StopContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" with timeout 2 (s)" Jul 15 05:22:08.781015 containerd[1561]: time="2025-07-15T05:22:08.780990878Z" level=info msg="Stop container \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" with signal terminated" Jul 15 05:22:08.788819 systemd-networkd[1473]: lxc_health: Link DOWN Jul 15 05:22:08.788831 systemd-networkd[1473]: lxc_health: Lost carrier Jul 15 05:22:08.793891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7-rootfs.mount: Deactivated successfully. Jul 15 05:22:08.812395 systemd[1]: cri-containerd-c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae.scope: Deactivated successfully. Jul 15 05:22:08.812963 systemd[1]: cri-containerd-c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae.scope: Consumed 6.740s CPU time, 122.9M memory peak, 708K read from disk, 13.3M written to disk. 
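The exit event for container 7cbbb0b1… records its time as a split epoch value, exited_at:{seconds:1752556928 nanos:768731583}. A small illustrative conversion, standard library only, showing that this value matches the surrounding 05:22:08 journal stamps:

    # Convert containerd's exited_at {seconds, nanos} pair to an RFC 3339 string.
    from datetime import datetime, timezone

    def exited_at(seconds: int, nanos: int) -> str:
        ts = datetime.fromtimestamp(seconds, tz=timezone.utc)
        return ts.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

    # Values taken from the exit event above.
    print(exited_at(1752556928, 768731583))  # 2025-07-15T05:22:08.768731583Z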
Jul 15 05:22:08.813419 containerd[1561]: time="2025-07-15T05:22:08.813383647Z" level=info msg="received exit event container_id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" pid:3379 exited_at:{seconds:1752556928 nanos:813008981}" Jul 15 05:22:08.813790 containerd[1561]: time="2025-07-15T05:22:08.813759827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" id:\"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" pid:3379 exited_at:{seconds:1752556928 nanos:813008981}" Jul 15 05:22:08.814876 containerd[1561]: time="2025-07-15T05:22:08.814838721Z" level=info msg="StopContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" returns successfully" Jul 15 05:22:08.817433 containerd[1561]: time="2025-07-15T05:22:08.817404430Z" level=info msg="StopPodSandbox for \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\"" Jul 15 05:22:08.817493 containerd[1561]: time="2025-07-15T05:22:08.817467410Z" level=info msg="Container to stop \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.824814 systemd[1]: cri-containerd-fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd.scope: Deactivated successfully. Jul 15 05:22:08.826438 containerd[1561]: time="2025-07-15T05:22:08.825868445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" id:\"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" pid:2903 exit_status:137 exited_at:{seconds:1752556928 nanos:825454833}" Jul 15 05:22:08.835832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae-rootfs.mount: Deactivated successfully. 
Jul 15 05:22:08.847311 containerd[1561]: time="2025-07-15T05:22:08.847273941Z" level=info msg="StopContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" returns successfully" Jul 15 05:22:08.847718 containerd[1561]: time="2025-07-15T05:22:08.847697511Z" level=info msg="StopPodSandbox for \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\"" Jul 15 05:22:08.847779 containerd[1561]: time="2025-07-15T05:22:08.847754049Z" level=info msg="Container to stop \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.847779 containerd[1561]: time="2025-07-15T05:22:08.847765311Z" level=info msg="Container to stop \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.847779 containerd[1561]: time="2025-07-15T05:22:08.847773777Z" level=info msg="Container to stop \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.848297 containerd[1561]: time="2025-07-15T05:22:08.848269154Z" level=info msg="Container to stop \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.848659 containerd[1561]: time="2025-07-15T05:22:08.848437026Z" level=info msg="Container to stop \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 05:22:08.857109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd-rootfs.mount: Deactivated successfully. Jul 15 05:22:08.858265 systemd[1]: cri-containerd-e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7.scope: Deactivated successfully. Jul 15 05:22:08.866327 containerd[1561]: time="2025-07-15T05:22:08.866285727Z" level=info msg="shim disconnected" id=fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd namespace=k8s.io Jul 15 05:22:08.866327 containerd[1561]: time="2025-07-15T05:22:08.866315924Z" level=warning msg="cleaning up after shim disconnected" id=fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd namespace=k8s.io Jul 15 05:22:08.878548 containerd[1561]: time="2025-07-15T05:22:08.866323478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 05:22:08.879026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7-rootfs.mount: Deactivated successfully. 
Jul 15 05:22:08.882668 containerd[1561]: time="2025-07-15T05:22:08.882381543Z" level=info msg="shim disconnected" id=e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7 namespace=k8s.io Jul 15 05:22:08.882788 containerd[1561]: time="2025-07-15T05:22:08.882773694Z" level=warning msg="cleaning up after shim disconnected" id=e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7 namespace=k8s.io Jul 15 05:22:08.882851 containerd[1561]: time="2025-07-15T05:22:08.882825302Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 05:22:08.902033 containerd[1561]: time="2025-07-15T05:22:08.901959212Z" level=info msg="received exit event sandbox_id:\"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" exit_status:137 exited_at:{seconds:1752556928 nanos:858003340}" Jul 15 05:22:08.903301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7-shm.mount: Deactivated successfully. Jul 15 05:22:08.909779 containerd[1561]: time="2025-07-15T05:22:08.909732905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" id:\"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" pid:2911 exit_status:137 exited_at:{seconds:1752556928 nanos:858003340}" Jul 15 05:22:08.910051 containerd[1561]: time="2025-07-15T05:22:08.909776770Z" level=info msg="received exit event sandbox_id:\"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" exit_status:137 exited_at:{seconds:1752556928 nanos:825454833}" Jul 15 05:22:08.910182 containerd[1561]: time="2025-07-15T05:22:08.910135807Z" level=info msg="TearDown network for sandbox \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" successfully" Jul 15 05:22:08.910182 containerd[1561]: time="2025-07-15T05:22:08.910169190Z" level=info msg="StopPodSandbox for \"fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd\" returns successfully" Jul 15 05:22:08.912484 containerd[1561]: time="2025-07-15T05:22:08.912449793Z" level=info msg="TearDown network for sandbox \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" successfully" Jul 15 05:22:08.912484 containerd[1561]: time="2025-07-15T05:22:08.912476995Z" level=info msg="StopPodSandbox for \"e9bbab8a06a87e180a3e1535ebf99de1afa78909e2d242685a61d89141d796e7\" returns successfully" Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027683 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cni-path\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027744 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-hostproc\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027771 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-hubble-tls\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027768 2719 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027788 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.027832 kubelet[2719]: I0715 05:22:09.027792 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-kernel\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028359 kubelet[2719]: I0715 05:22:09.027835 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.028359 kubelet[2719]: I0715 05:22:09.027862 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-cgroup\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028359 kubelet[2719]: I0715 05:22:09.027887 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-xtables-lock\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028359 kubelet[2719]: I0715 05:22:09.027911 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3689fd1d-2da5-4360-90c0-77b17e259c52-cilium-config-path\") pod \"3689fd1d-2da5-4360-90c0-77b17e259c52\" (UID: \"3689fd1d-2da5-4360-90c0-77b17e259c52\") " Jul 15 05:22:09.028359 kubelet[2719]: I0715 05:22:09.027954 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5d9f83-e226-4ce6-a454-e80087969575-clustermesh-secrets\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.027966 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.027973 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-bpf-maps\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.027993 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-etc-cni-netd\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.028014 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-net\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.028034 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-lib-modules\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028478 kubelet[2719]: I0715 05:22:09.028055 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmd4l\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-kube-api-access-qmd4l\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028077 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-config-path\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028094 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-run\") pod \"3d5d9f83-e226-4ce6-a454-e80087969575\" (UID: \"3d5d9f83-e226-4ce6-a454-e80087969575\") " Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028117 2719 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbg98\" (UniqueName: \"kubernetes.io/projected/3689fd1d-2da5-4360-90c0-77b17e259c52-kube-api-access-vbg98\") pod \"3689fd1d-2da5-4360-90c0-77b17e259c52\" (UID: \"3689fd1d-2da5-4360-90c0-77b17e259c52\") " Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028150 2719 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028162 2719 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028173 2719 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.028615 kubelet[2719]: I0715 05:22:09.028185 2719 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.029895 kubelet[2719]: I0715 05:22:09.027993 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.029895 kubelet[2719]: I0715 05:22:09.029744 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.030348 kubelet[2719]: I0715 05:22:09.029772 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.030348 kubelet[2719]: I0715 05:22:09.029786 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.030834 kubelet[2719]: I0715 05:22:09.030802 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.031664 kubelet[2719]: I0715 05:22:09.030953 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 05:22:09.034122 kubelet[2719]: I0715 05:22:09.034101 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3689fd1d-2da5-4360-90c0-77b17e259c52-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3689fd1d-2da5-4360-90c0-77b17e259c52" (UID: "3689fd1d-2da5-4360-90c0-77b17e259c52"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 05:22:09.034546 kubelet[2719]: I0715 05:22:09.034468 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 05:22:09.034689 kubelet[2719]: I0715 05:22:09.034552 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 05:22:09.034822 kubelet[2719]: I0715 05:22:09.034792 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3689fd1d-2da5-4360-90c0-77b17e259c52-kube-api-access-vbg98" (OuterVolumeSpecName: "kube-api-access-vbg98") pod "3689fd1d-2da5-4360-90c0-77b17e259c52" (UID: "3689fd1d-2da5-4360-90c0-77b17e259c52"). InnerVolumeSpecName "kube-api-access-vbg98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 05:22:09.034885 kubelet[2719]: I0715 05:22:09.034868 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5d9f83-e226-4ce6-a454-e80087969575-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 05:22:09.036241 kubelet[2719]: I0715 05:22:09.036204 2719 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-kube-api-access-qmd4l" (OuterVolumeSpecName: "kube-api-access-qmd4l") pod "3d5d9f83-e226-4ce6-a454-e80087969575" (UID: "3d5d9f83-e226-4ce6-a454-e80087969575"). InnerVolumeSpecName "kube-api-access-qmd4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 05:22:09.128621 kubelet[2719]: I0715 05:22:09.128531 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128621 kubelet[2719]: I0715 05:22:09.128585 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3689fd1d-2da5-4360-90c0-77b17e259c52-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128621 kubelet[2719]: I0715 05:22:09.128614 2719 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d5d9f83-e226-4ce6-a454-e80087969575-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128666 2719 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128680 2719 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128690 2719 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128702 2719 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128713 2719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmd4l\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-kube-api-access-qmd4l\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128723 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128733 2719 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d5d9f83-e226-4ce6-a454-e80087969575-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.128851 kubelet[2719]: I0715 05:22:09.128744 2719 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbg98\" (UniqueName: \"kubernetes.io/projected/3689fd1d-2da5-4360-90c0-77b17e259c52-kube-api-access-vbg98\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.129144 kubelet[2719]: I0715 05:22:09.128756 2719 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d5d9f83-e226-4ce6-a454-e80087969575-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 05:22:09.263323 systemd[1]: Removed slice kubepods-burstable-pod3d5d9f83_e226_4ce6_a454_e80087969575.slice - libcontainer container kubepods-burstable-pod3d5d9f83_e226_4ce6_a454_e80087969575.slice. 
Jul 15 05:22:09.263620 systemd[1]: kubepods-burstable-pod3d5d9f83_e226_4ce6_a454_e80087969575.slice: Consumed 6.858s CPU time, 123.3M memory peak, 796K read from disk, 13.3M written to disk. Jul 15 05:22:09.264865 systemd[1]: Removed slice kubepods-besteffort-pod3689fd1d_2da5_4360_90c0_77b17e259c52.slice - libcontainer container kubepods-besteffort-pod3689fd1d_2da5_4360_90c0_77b17e259c52.slice. Jul 15 05:22:09.489517 kubelet[2719]: I0715 05:22:09.489454 2719 scope.go:117] "RemoveContainer" containerID="7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7" Jul 15 05:22:09.491498 containerd[1561]: time="2025-07-15T05:22:09.491336166Z" level=info msg="RemoveContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\"" Jul 15 05:22:09.514443 containerd[1561]: time="2025-07-15T05:22:09.514113354Z" level=info msg="RemoveContainer for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" returns successfully" Jul 15 05:22:09.519920 kubelet[2719]: I0715 05:22:09.519863 2719 scope.go:117] "RemoveContainer" containerID="7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7" Jul 15 05:22:09.520265 containerd[1561]: time="2025-07-15T05:22:09.520221575Z" level=error msg="ContainerStatus for \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\": not found" Jul 15 05:22:09.522508 kubelet[2719]: E0715 05:22:09.522446 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\": not found" containerID="7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7" Jul 15 05:22:09.523457 kubelet[2719]: I0715 05:22:09.523277 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7"} err="failed to get container status \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7cbbb0b1a372e8c5cb638071941b0142a05205482f2cc7796b8485b5e35e81f7\": not found" Jul 15 05:22:09.523457 kubelet[2719]: I0715 05:22:09.523382 2719 scope.go:117] "RemoveContainer" containerID="c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae" Jul 15 05:22:09.525742 containerd[1561]: time="2025-07-15T05:22:09.525706234Z" level=info msg="RemoveContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\"" Jul 15 05:22:09.533497 containerd[1561]: time="2025-07-15T05:22:09.533438321Z" level=info msg="RemoveContainer for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" returns successfully" Jul 15 05:22:09.533761 kubelet[2719]: I0715 05:22:09.533725 2719 scope.go:117] "RemoveContainer" containerID="3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c" Jul 15 05:22:09.535672 containerd[1561]: time="2025-07-15T05:22:09.535209447Z" level=info msg="RemoveContainer for \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\"" Jul 15 05:22:09.543320 containerd[1561]: time="2025-07-15T05:22:09.543282055Z" level=info msg="RemoveContainer for \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" returns successfully" Jul 15 05:22:09.543441 kubelet[2719]: I0715 
05:22:09.543415 2719 scope.go:117] "RemoveContainer" containerID="2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3" Jul 15 05:22:09.545357 containerd[1561]: time="2025-07-15T05:22:09.545317718Z" level=info msg="RemoveContainer for \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\"" Jul 15 05:22:09.550968 containerd[1561]: time="2025-07-15T05:22:09.550913409Z" level=info msg="RemoveContainer for \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" returns successfully" Jul 15 05:22:09.551101 kubelet[2719]: I0715 05:22:09.551082 2719 scope.go:117] "RemoveContainer" containerID="dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726" Jul 15 05:22:09.552369 containerd[1561]: time="2025-07-15T05:22:09.552338385Z" level=info msg="RemoveContainer for \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\"" Jul 15 05:22:09.556713 containerd[1561]: time="2025-07-15T05:22:09.556686209Z" level=info msg="RemoveContainer for \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" returns successfully" Jul 15 05:22:09.556843 kubelet[2719]: I0715 05:22:09.556822 2719 scope.go:117] "RemoveContainer" containerID="ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5" Jul 15 05:22:09.558151 containerd[1561]: time="2025-07-15T05:22:09.558117877Z" level=info msg="RemoveContainer for \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\"" Jul 15 05:22:09.562506 containerd[1561]: time="2025-07-15T05:22:09.562458367Z" level=info msg="RemoveContainer for \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" returns successfully" Jul 15 05:22:09.562624 kubelet[2719]: I0715 05:22:09.562607 2719 scope.go:117] "RemoveContainer" containerID="c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae" Jul 15 05:22:09.562905 containerd[1561]: time="2025-07-15T05:22:09.562821281Z" level=error msg="ContainerStatus for \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\": not found" Jul 15 05:22:09.563061 kubelet[2719]: E0715 05:22:09.562974 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\": not found" containerID="c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae" Jul 15 05:22:09.563061 kubelet[2719]: I0715 05:22:09.563000 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae"} err="failed to get container status \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"c54549210ccc7f3f1121b2977cbacee047a4fb0163b4b0473bee767d4ad8b5ae\": not found" Jul 15 05:22:09.563061 kubelet[2719]: I0715 05:22:09.563022 2719 scope.go:117] "RemoveContainer" containerID="3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c" Jul 15 05:22:09.563224 containerd[1561]: time="2025-07-15T05:22:09.563189876Z" level=error msg="ContainerStatus for \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\": not found" Jul 15 05:22:09.563374 kubelet[2719]: E0715 05:22:09.563289 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\": not found" containerID="3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c" Jul 15 05:22:09.563374 kubelet[2719]: I0715 05:22:09.563309 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c"} err="failed to get container status \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3989de0b10f14b2d1a37cb7ed45bff178e87f795ff835ea653fdb9a99252d43c\": not found" Jul 15 05:22:09.563374 kubelet[2719]: I0715 05:22:09.563325 2719 scope.go:117] "RemoveContainer" containerID="2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3" Jul 15 05:22:09.563531 containerd[1561]: time="2025-07-15T05:22:09.563495110Z" level=error msg="ContainerStatus for \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\": not found" Jul 15 05:22:09.563681 kubelet[2719]: E0715 05:22:09.563619 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\": not found" containerID="2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3" Jul 15 05:22:09.563681 kubelet[2719]: I0715 05:22:09.563659 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3"} err="failed to get container status \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2be2bbb29230b3d9c05696475735706d65c725d4a0bf845c376a9e1feee7f2c3\": not found" Jul 15 05:22:09.563681 kubelet[2719]: I0715 05:22:09.563674 2719 scope.go:117] "RemoveContainer" containerID="dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726" Jul 15 05:22:09.563864 containerd[1561]: time="2025-07-15T05:22:09.563819590Z" level=error msg="ContainerStatus for \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\": not found" Jul 15 05:22:09.563948 kubelet[2719]: E0715 05:22:09.563910 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\": not found" containerID="dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726" Jul 15 05:22:09.563993 kubelet[2719]: I0715 05:22:09.563948 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726"} err="failed to get container status 
\"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\": rpc error: code = NotFound desc = an error occurred when try to find container \"dabad41491c8424d29ad9ce06676127b67c0b91c4f029766cbe378c528b06726\": not found" Jul 15 05:22:09.563993 kubelet[2719]: I0715 05:22:09.563966 2719 scope.go:117] "RemoveContainer" containerID="ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5" Jul 15 05:22:09.564119 containerd[1561]: time="2025-07-15T05:22:09.564085589Z" level=error msg="ContainerStatus for \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\": not found" Jul 15 05:22:09.564198 kubelet[2719]: E0715 05:22:09.564178 2719 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\": not found" containerID="ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5" Jul 15 05:22:09.564234 kubelet[2719]: I0715 05:22:09.564201 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5"} err="failed to get container status \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba2d2bbf64f0ff4971627a95f571a33d5a46695f6f46cbffe5109f68c5d791d5\": not found" Jul 15 05:22:09.795564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fae62e08ec4d050ab1a605e53c6d379c3f3b96621fac379c5b358a77ac0c0bdd-shm.mount: Deactivated successfully. Jul 15 05:22:09.795697 systemd[1]: var-lib-kubelet-pods-3689fd1d\x2d2da5\x2d4360\x2d90c0\x2d77b17e259c52-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbg98.mount: Deactivated successfully. Jul 15 05:22:09.795792 systemd[1]: var-lib-kubelet-pods-3d5d9f83\x2de226\x2d4ce6\x2da454\x2de80087969575-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqmd4l.mount: Deactivated successfully. Jul 15 05:22:09.795885 systemd[1]: var-lib-kubelet-pods-3d5d9f83\x2de226\x2d4ce6\x2da454\x2de80087969575-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 05:22:09.796004 systemd[1]: var-lib-kubelet-pods-3d5d9f83\x2de226\x2d4ce6\x2da454\x2de80087969575-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 05:22:10.707917 sshd[4363]: Connection closed by 10.0.0.1 port 58804 Jul 15 05:22:10.708321 sshd-session[4360]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:10.720518 systemd[1]: sshd@24-10.0.0.126:22-10.0.0.1:58804.service: Deactivated successfully. Jul 15 05:22:10.722582 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 05:22:10.723568 systemd-logind[1536]: Session 25 logged out. Waiting for processes to exit. Jul 15 05:22:10.726955 systemd[1]: Started sshd@25-10.0.0.126:22-10.0.0.1:43710.service - OpenSSH per-connection server daemon (10.0.0.1:43710). Jul 15 05:22:10.727797 systemd-logind[1536]: Removed session 25. 
Jul 15 05:22:10.785099 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 43710 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:10.787180 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:10.792255 systemd-logind[1536]: New session 26 of user core. Jul 15 05:22:10.805807 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 05:22:11.254236 kubelet[2719]: I0715 05:22:11.254193 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3689fd1d-2da5-4360-90c0-77b17e259c52" path="/var/lib/kubelet/pods/3689fd1d-2da5-4360-90c0-77b17e259c52/volumes" Jul 15 05:22:11.254727 kubelet[2719]: I0715 05:22:11.254713 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" path="/var/lib/kubelet/pods/3d5d9f83-e226-4ce6-a454-e80087969575/volumes" Jul 15 05:22:11.452048 sshd[4516]: Connection closed by 10.0.0.1 port 43710 Jul 15 05:22:11.453970 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:11.467193 systemd[1]: sshd@25-10.0.0.126:22-10.0.0.1:43710.service: Deactivated successfully. Jul 15 05:22:11.472195 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 05:22:11.477331 kubelet[2719]: E0715 05:22:11.477274 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="mount-cgroup" Jul 15 05:22:11.477331 kubelet[2719]: E0715 05:22:11.477309 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="apply-sysctl-overwrites" Jul 15 05:22:11.477331 kubelet[2719]: E0715 05:22:11.477318 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3689fd1d-2da5-4360-90c0-77b17e259c52" containerName="cilium-operator" Jul 15 05:22:11.477331 kubelet[2719]: E0715 05:22:11.477326 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="mount-bpf-fs" Jul 15 05:22:11.477331 kubelet[2719]: E0715 05:22:11.477334 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="cilium-agent" Jul 15 05:22:11.477537 kubelet[2719]: E0715 05:22:11.477343 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="clean-cilium-state" Jul 15 05:22:11.477537 kubelet[2719]: I0715 05:22:11.477368 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5d9f83-e226-4ce6-a454-e80087969575" containerName="cilium-agent" Jul 15 05:22:11.477537 kubelet[2719]: I0715 05:22:11.477376 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="3689fd1d-2da5-4360-90c0-77b17e259c52" containerName="cilium-operator" Jul 15 05:22:11.477697 systemd-logind[1536]: Session 26 logged out. Waiting for processes to exit. Jul 15 05:22:11.482049 systemd[1]: Started sshd@26-10.0.0.126:22-10.0.0.1:43716.service - OpenSSH per-connection server daemon (10.0.0.1:43716). Jul 15 05:22:11.489763 systemd-logind[1536]: Removed session 26. Jul 15 05:22:11.504238 systemd[1]: Created slice kubepods-burstable-pode1c14463_e5a1_44a8_ad41_118d265e611c.slice - libcontainer container kubepods-burstable-pode1c14463_e5a1_44a8_ad41_118d265e611c.slice. 
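The slice created for the replacement pod encodes its UID directly: kubelet's systemd cgroup driver swaps the dashes of the UID for underscores, since the dash acts as the hierarchy separator in slice names. A one-line illustration using the UID from the entry above:

    # How the pod slice name above is derived from the pod UID (illustration).
    uid = "e1c14463-e5a1-44a8-ad41-118d265e611c"  # new cilium pod UID from the log
    print(f"kubepods-burstable-pod{uid.replace('-', '_')}.slice")
    # kubepods-burstable-pode1c14463_e5a1_44a8_ad41_118d265e611c.slice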
Jul 15 05:22:11.543176 kubelet[2719]: I0715 05:22:11.543121 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-cni-path\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543176 kubelet[2719]: I0715 05:22:11.543162 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-cilium-run\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543184 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-host-proc-sys-kernel\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543203 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-cilium-cgroup\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543221 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1c14463-e5a1-44a8-ad41-118d265e611c-cilium-config-path\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543241 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-xtables-lock\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543258 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1c14463-e5a1-44a8-ad41-118d265e611c-clustermesh-secrets\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543319 kubelet[2719]: I0715 05:22:11.543276 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-lib-modules\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543296 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1c14463-e5a1-44a8-ad41-118d265e611c-hubble-tls\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543316 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-hostproc\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543332 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-bpf-maps\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543350 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-etc-cni-netd\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543369 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1c14463-e5a1-44a8-ad41-118d265e611c-host-proc-sys-net\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543452 kubelet[2719]: I0715 05:22:11.543387 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1c14463-e5a1-44a8-ad41-118d265e611c-cilium-ipsec-secrets\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.543574 kubelet[2719]: I0715 05:22:11.543405 2719 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxzv\" (UniqueName: \"kubernetes.io/projected/e1c14463-e5a1-44a8-ad41-118d265e611c-kube-api-access-vgxzv\") pod \"cilium-kgggf\" (UID: \"e1c14463-e5a1-44a8-ad41-118d265e611c\") " pod="kube-system/cilium-kgggf" Jul 15 05:22:11.544061 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 43716 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:11.545726 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:11.550422 systemd-logind[1536]: New session 27 of user core. Jul 15 05:22:11.561885 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 15 05:22:11.614361 sshd[4532]: Connection closed by 10.0.0.1 port 43716 Jul 15 05:22:11.614827 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:11.629789 systemd[1]: sshd@26-10.0.0.126:22-10.0.0.1:43716.service: Deactivated successfully. Jul 15 05:22:11.631837 systemd[1]: session-27.scope: Deactivated successfully. Jul 15 05:22:11.632597 systemd-logind[1536]: Session 27 logged out. Waiting for processes to exit. Jul 15 05:22:11.635873 systemd[1]: Started sshd@27-10.0.0.126:22-10.0.0.1:43718.service - OpenSSH per-connection server daemon (10.0.0.1:43718). Jul 15 05:22:11.636582 systemd-logind[1536]: Removed session 27. Jul 15 05:22:11.694822 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 43718 ssh2: RSA SHA256:u8XLUfBAvkkcme5upcPT7VprXL+p6dqsv6pgcjAevNM Jul 15 05:22:11.696314 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 05:22:11.700426 systemd-logind[1536]: New session 28 of user core. 
Jul 15 05:22:11.712841 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 15 05:22:11.811098 containerd[1561]: time="2025-07-15T05:22:11.810607138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgggf,Uid:e1c14463-e5a1-44a8-ad41-118d265e611c,Namespace:kube-system,Attempt:0,}" Jul 15 05:22:12.026219 containerd[1561]: time="2025-07-15T05:22:12.026156554Z" level=info msg="connecting to shim d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" namespace=k8s.io protocol=ttrpc version=3 Jul 15 05:22:12.047806 systemd[1]: Started cri-containerd-d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34.scope - libcontainer container d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34. Jul 15 05:22:12.072469 containerd[1561]: time="2025-07-15T05:22:12.072337461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgggf,Uid:e1c14463-e5a1-44a8-ad41-118d265e611c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\"" Jul 15 05:22:12.074677 containerd[1561]: time="2025-07-15T05:22:12.074605262Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 05:22:12.083452 containerd[1561]: time="2025-07-15T05:22:12.083401730Z" level=info msg="Container f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:22:12.090977 containerd[1561]: time="2025-07-15T05:22:12.090930506Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\"" Jul 15 05:22:12.091496 containerd[1561]: time="2025-07-15T05:22:12.091471169Z" level=info msg="StartContainer for \"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\"" Jul 15 05:22:12.092347 containerd[1561]: time="2025-07-15T05:22:12.092311394Z" level=info msg="connecting to shim f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" protocol=ttrpc version=3 Jul 15 05:22:12.117922 systemd[1]: Started cri-containerd-f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab.scope - libcontainer container f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab. Jul 15 05:22:12.149240 containerd[1561]: time="2025-07-15T05:22:12.149198224Z" level=info msg="StartContainer for \"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\" returns successfully" Jul 15 05:22:12.159002 systemd[1]: cri-containerd-f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab.scope: Deactivated successfully. 
Jul 15 05:22:12.160069 containerd[1561]: time="2025-07-15T05:22:12.160023436Z" level=info msg="received exit event container_id:\"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\" id:\"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\" pid:4612 exited_at:{seconds:1752556932 nanos:159743391}" Jul 15 05:22:12.160212 containerd[1561]: time="2025-07-15T05:22:12.160179495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\" id:\"f55499dd3fcdd74b0833f478354a761cf925abc58fc87afb3e1d050cdab066ab\" pid:4612 exited_at:{seconds:1752556932 nanos:159743391}" Jul 15 05:22:12.504691 containerd[1561]: time="2025-07-15T05:22:12.504615740Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 05:22:12.511707 containerd[1561]: time="2025-07-15T05:22:12.511671854Z" level=info msg="Container b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:22:12.518605 containerd[1561]: time="2025-07-15T05:22:12.518551992Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\"" Jul 15 05:22:12.518985 containerd[1561]: time="2025-07-15T05:22:12.518962156Z" level=info msg="StartContainer for \"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\"" Jul 15 05:22:12.519771 containerd[1561]: time="2025-07-15T05:22:12.519749099Z" level=info msg="connecting to shim b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" protocol=ttrpc version=3 Jul 15 05:22:12.546832 systemd[1]: Started cri-containerd-b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0.scope - libcontainer container b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0. Jul 15 05:22:12.578689 systemd[1]: cri-containerd-b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0.scope: Deactivated successfully. Jul 15 05:22:12.579093 containerd[1561]: time="2025-07-15T05:22:12.579062184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\" id:\"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\" pid:4657 exited_at:{seconds:1752556932 nanos:578808830}" Jul 15 05:22:12.706261 containerd[1561]: time="2025-07-15T05:22:12.706202881Z" level=info msg="received exit event container_id:\"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\" id:\"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\" pid:4657 exited_at:{seconds:1752556932 nanos:578808830}" Jul 15 05:22:12.707109 containerd[1561]: time="2025-07-15T05:22:12.707091188Z" level=info msg="StartContainer for \"b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0\" returns successfully" Jul 15 05:22:12.729251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b78e8cfd93ad0e839dfe004b6a4ccaba6022bc898549980387dbc639d5accda0-rootfs.mount: Deactivated successfully. 
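The RunPodSandbox, CreateContainer and StartContainer messages above are containerd's CRI service answering requests from kubelet over the containerd socket, and the TaskExit / "received exit event" pairs are the matching task lifecycle notifications. A minimal sketch of issuing the same RunPodSandbox call directly against the CRI gRPC API (k8s.io/cri-api) follows; the socket path is containerd's default, the metadata is copied from the log line, and the snippet is illustrative rather than a reconstruction of what kubelet actually sends (a real request also carries log directory, DNS, cgroup and security settings):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd exposes its CRI RuntimeService on the default socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The metadata mirrors the PodSandboxMetadata printed in the log above.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-kgggf",
				Uid:       "e1c14463-e5a1-44a8-ad41-118d265e611c",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```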
Jul 15 05:22:13.314333 kubelet[2719]: E0715 05:22:13.314287 2719 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 05:22:13.508424 containerd[1561]: time="2025-07-15T05:22:13.508380512Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 05:22:13.527164 containerd[1561]: time="2025-07-15T05:22:13.527123027Z" level=info msg="Container e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:22:13.536848 containerd[1561]: time="2025-07-15T05:22:13.536805678Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\"" Jul 15 05:22:13.537361 containerd[1561]: time="2025-07-15T05:22:13.537281406Z" level=info msg="StartContainer for \"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\"" Jul 15 05:22:13.538873 containerd[1561]: time="2025-07-15T05:22:13.538843157Z" level=info msg="connecting to shim e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" protocol=ttrpc version=3 Jul 15 05:22:13.567805 systemd[1]: Started cri-containerd-e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6.scope - libcontainer container e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6. Jul 15 05:22:13.609014 containerd[1561]: time="2025-07-15T05:22:13.608948420Z" level=info msg="StartContainer for \"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\" returns successfully" Jul 15 05:22:13.610598 systemd[1]: cri-containerd-e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6.scope: Deactivated successfully. Jul 15 05:22:13.611514 containerd[1561]: time="2025-07-15T05:22:13.611484259Z" level=info msg="received exit event container_id:\"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\" id:\"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\" pid:4702 exited_at:{seconds:1752556933 nanos:611311430}" Jul 15 05:22:13.611750 containerd[1561]: time="2025-07-15T05:22:13.611731632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\" id:\"e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6\" pid:4702 exited_at:{seconds:1752556933 nanos:611311430}" Jul 15 05:22:13.649513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3d32ae86043742ce6f41145490c96137c7695fa00f80ccb372a994ea309ddb6-rootfs.mount: Deactivated successfully. 
Jul 15 05:22:14.514789 containerd[1561]: time="2025-07-15T05:22:14.514622206Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 05:22:14.523685 containerd[1561]: time="2025-07-15T05:22:14.523426284Z" level=info msg="Container 081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:22:14.533282 containerd[1561]: time="2025-07-15T05:22:14.533216753Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\"" Jul 15 05:22:14.534246 containerd[1561]: time="2025-07-15T05:22:14.533797651Z" level=info msg="StartContainer for \"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\"" Jul 15 05:22:14.535003 containerd[1561]: time="2025-07-15T05:22:14.534979156Z" level=info msg="connecting to shim 081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" protocol=ttrpc version=3 Jul 15 05:22:14.561754 systemd[1]: Started cri-containerd-081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4.scope - libcontainer container 081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4. Jul 15 05:22:14.590671 systemd[1]: cri-containerd-081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4.scope: Deactivated successfully. Jul 15 05:22:14.591053 containerd[1561]: time="2025-07-15T05:22:14.590956886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\" id:\"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\" pid:4741 exited_at:{seconds:1752556934 nanos:590734742}" Jul 15 05:22:14.683548 containerd[1561]: time="2025-07-15T05:22:14.683481624Z" level=info msg="received exit event container_id:\"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\" id:\"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\" pid:4741 exited_at:{seconds:1752556934 nanos:590734742}" Jul 15 05:22:14.691355 containerd[1561]: time="2025-07-15T05:22:14.691310431Z" level=info msg="StartContainer for \"081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4\" returns successfully" Jul 15 05:22:14.704144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-081ba4129460aa750a3fc69fb82f4aef104755c695e0aa84e34e3f7f5632e6f4-rootfs.mount: Deactivated successfully. 
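Each init container in this sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) leaves the same trace: CreateContainer inside sandbox d68c9dff…, a cri-containerd-<id>.scope starting, a TaskExit event once the task finishes, and cleanup of the task's rootfs mount. Driven through containerd's Go client instead of the CRI layer, that create → start → wait → exit cycle looks roughly like the sketch below (containerd 1.x import paths; the image reference and IDs are placeholders, not values from this log):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Placeholder image reference; the log does not show which image was used.
	image, err := client.Pull(ctx, "quay.io/cilium/cilium:v1.16.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "mount-cgroup-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("mount-cgroup-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Subscribe to the exit channel before Start so the exit event is not missed,
	// analogous to the TaskExit events recorded in the log.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, _ := status.Result()
	log.Printf("task exited with status %d", code)
}
```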
Jul 15 05:22:15.249285 kubelet[2719]: I0715 05:22:15.249207 2719 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T05:22:15Z","lastTransitionTime":"2025-07-15T05:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 05:22:15.519981 containerd[1561]: time="2025-07-15T05:22:15.519831581Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 05:22:15.541624 containerd[1561]: time="2025-07-15T05:22:15.541557645Z" level=info msg="Container 9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588: CDI devices from CRI Config.CDIDevices: []" Jul 15 05:22:15.550844 containerd[1561]: time="2025-07-15T05:22:15.550780093Z" level=info msg="CreateContainer within sandbox \"d68c9dff071124c50ed9e2ffe91aa622f336e856b878bda6a9c5a74fb593ea34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\"" Jul 15 05:22:15.551436 containerd[1561]: time="2025-07-15T05:22:15.551399935Z" level=info msg="StartContainer for \"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\"" Jul 15 05:22:15.552585 containerd[1561]: time="2025-07-15T05:22:15.552552603Z" level=info msg="connecting to shim 9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588" address="unix:///run/containerd/s/ff27b17fadf3f068084520169cba2b93d39c6018d98753f7e1774495e45562ef" protocol=ttrpc version=3 Jul 15 05:22:15.582813 systemd[1]: Started cri-containerd-9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588.scope - libcontainer container 9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588. 
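The kubelet messages at 05:22:13 and 05:22:15 show the node's Ready condition dropping to False with reason KubeletNotReady because the CNI plugin is not yet initialized; it recovers once the cilium-agent container started below is running. A small client-go sketch for inspecting those conditions from outside the node (the node name "localhost" comes from the log; loading credentials from the default kubeconfig is an assumption):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; in-cluster config would also work.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// The Ready condition flips back to True once the CNI plugin (cilium here) initializes.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s reason=%s message=%q\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```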
Jul 15 05:22:15.621738 containerd[1561]: time="2025-07-15T05:22:15.621680577Z" level=info msg="StartContainer for \"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" returns successfully" Jul 15 05:22:15.694517 containerd[1561]: time="2025-07-15T05:22:15.694444134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" id:\"6b2bc49859fa6c79e2d47d0eca7364628801d796b48b2da98700b68b53aaa244\" pid:4810 exited_at:{seconds:1752556935 nanos:694103103}" Jul 15 05:22:16.114710 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 15 05:22:16.931550 kubelet[2719]: I0715 05:22:16.931469 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kgggf" podStartSLOduration=5.931450184 podStartE2EDuration="5.931450184s" podCreationTimestamp="2025-07-15 05:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 05:22:16.931207843 +0000 UTC m=+93.776332130" watchObservedRunningTime="2025-07-15 05:22:16.931450184 +0000 UTC m=+93.776574461" Jul 15 05:22:18.002778 containerd[1561]: time="2025-07-15T05:22:18.002678039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" id:\"b1d67ec679323493993997ad0c72e0bfa04bc17fc9a87d5a68d2633031bc58fe\" pid:4969 exit_status:1 exited_at:{seconds:1752556938 nanos:2366256}" Jul 15 05:22:19.333603 systemd-networkd[1473]: lxc_health: Link UP Jul 15 05:22:19.335052 systemd-networkd[1473]: lxc_health: Gained carrier Jul 15 05:22:20.136228 containerd[1561]: time="2025-07-15T05:22:20.136169561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" id:\"908538834cf99840d7a8d192a9302879d49488381150f176a0466968a8a77cab\" pid:5342 exited_at:{seconds:1752556940 nanos:135829363}" Jul 15 05:22:20.678806 systemd-networkd[1473]: lxc_health: Gained IPv6LL Jul 15 05:22:22.236959 containerd[1561]: time="2025-07-15T05:22:22.236902270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" id:\"c9917f2f07e05791954f253452550c035c67cf2b42af3acf5bb2de96921bbd7e\" pid:5379 exited_at:{seconds:1752556942 nanos:236137595}" Jul 15 05:22:24.346698 containerd[1561]: time="2025-07-15T05:22:24.346627484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d9b9ba12860bc6d9f627ff27c4adae18e47d63840a6817c147082a1af133588\" id:\"3617db1e92f3e92f483170660bb78b7876c9676c4273c96e5c5a87c228cff24f\" pid:5409 exited_at:{seconds:1752556944 nanos:346314309}" Jul 15 05:22:24.355932 sshd[4546]: Connection closed by 10.0.0.1 port 43718 Jul 15 05:22:24.356308 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Jul 15 05:22:24.360989 systemd[1]: sshd@27-10.0.0.126:22-10.0.0.1:43718.service: Deactivated successfully. Jul 15 05:22:24.363404 systemd[1]: session-28.scope: Deactivated successfully. Jul 15 05:22:24.364212 systemd-logind[1536]: Session 28 logged out. Waiting for processes to exit. Jul 15 05:22:24.365616 systemd-logind[1536]: Removed session 28.
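One detail worth checking in the pod_startup_latency_tracker line above: with firstStartedPulling and lastFinishedPulling still at their zero values (no image pull was needed), the reported podStartSLOduration=5.931450184 lines up with watchObservedRunningTime minus podCreationTimestamp. A short Go check of that arithmetic, using the timestamps exactly as printed:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time formatting, as used in the kubelet log fields above
	// (the monotonic-clock suffix "m=+..." is dropped for parsing).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-07-15 05:22:11 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-07-15 05:22:16.931450184 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no image-pull interval to subtract, the difference comes out to
	// ≈ 5.931450184 seconds, matching the logged podStartSLOduration.
	fmt.Printf("%.9f\n", observed.Sub(created).Seconds())
}
```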