Jul 10 00:25:15.950485 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:25:15.950506 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:25:15.950518 kernel: BIOS-provided physical RAM map:
Jul 10 00:25:15.950524 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 10 00:25:15.950530 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 10 00:25:15.950537 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 10 00:25:15.950544 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 10 00:25:15.950551 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jul 10 00:25:15.950561 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 10 00:25:15.950568 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 10 00:25:15.950575 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 10 00:25:15.950583 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 10 00:25:15.950590 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 10 00:25:15.950597 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 10 00:25:15.950605 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 10 00:25:15.950612 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 10 00:25:15.950627 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 10 00:25:15.950634 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:25:15.950641 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:25:15.950647 kernel: NX (Execute Disable) protection: active
Jul 10 00:25:15.950654 kernel: APIC: Static calls initialized
Jul 10 00:25:15.950661 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Jul 10 00:25:15.950669 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Jul 10 00:25:15.950682 kernel: extended physical RAM map:
Jul 10 00:25:15.950689 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 10 00:25:15.950696 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 10 00:25:15.950703 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 10 00:25:15.950713 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 10 00:25:15.950720 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Jul 10 00:25:15.950727 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Jul 10 00:25:15.950734 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Jul 10 00:25:15.950741 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Jul 10 00:25:15.950747 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Jul 10 00:25:15.950754 kernel: reserve setup_data: [mem
0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 10 00:25:15.950761 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 10 00:25:15.950768 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 10 00:25:15.950775 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 10 00:25:15.950782 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 10 00:25:15.950790 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 10 00:25:15.950798 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 10 00:25:15.950808 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 10 00:25:15.950815 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 00:25:15.950822 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:25:15.950830 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:25:15.950839 kernel: efi: EFI v2.7 by EDK II Jul 10 00:25:15.950846 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Jul 10 00:25:15.950853 kernel: random: crng init done Jul 10 00:25:15.950860 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 10 00:25:15.950867 kernel: secureboot: Secure boot enabled Jul 10 00:25:15.950875 kernel: SMBIOS 2.8 present. Jul 10 00:25:15.950882 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 10 00:25:15.950889 kernel: DMI: Memory slots populated: 1/1 Jul 10 00:25:15.950898 kernel: Hypervisor detected: KVM Jul 10 00:25:15.950906 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 00:25:15.950913 kernel: kvm-clock: using sched offset of 7029412364 cycles Jul 10 00:25:15.950923 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 00:25:15.950930 kernel: tsc: Detected 2794.748 MHz processor Jul 10 00:25:15.950938 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:25:15.950945 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:25:15.950952 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Jul 10 00:25:15.950960 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 00:25:15.950971 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:25:15.950978 kernel: Using GB pages for direct mapping Jul 10 00:25:15.950988 kernel: ACPI: Early table checksum verification disabled Jul 10 00:25:15.950998 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Jul 10 00:25:15.951005 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:25:15.951013 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951020 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951027 kernel: ACPI: FACS 0x000000009BBDD000 000040 Jul 10 00:25:15.951034 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951042 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951049 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951059 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:25:15.951066 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 10 00:25:15.951073 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Jul 10 00:25:15.951080 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Jul 10 00:25:15.951088 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Jul 10 00:25:15.951095 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Jul 10 00:25:15.951102 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Jul 10 00:25:15.951109 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Jul 10 00:25:15.951117 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Jul 10 00:25:15.951126 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Jul 10 00:25:15.951137 kernel: No NUMA configuration found Jul 10 00:25:15.951145 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Jul 10 00:25:15.951156 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Jul 10 00:25:15.951170 kernel: Zone ranges: Jul 10 00:25:15.951184 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:25:15.951202 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Jul 10 00:25:15.951210 kernel: Normal empty Jul 10 00:25:15.951217 kernel: Device empty Jul 10 00:25:15.951224 kernel: Movable zone start for each node Jul 10 00:25:15.951233 kernel: Early memory node ranges Jul 10 00:25:15.951241 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Jul 10 00:25:15.951248 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Jul 10 00:25:15.951255 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Jul 10 00:25:15.951262 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Jul 10 00:25:15.951269 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Jul 10 00:25:15.951277 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Jul 10 00:25:15.951284 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:25:15.951291 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Jul 10 00:25:15.951301 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 10 00:25:15.951308 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 10 00:25:15.951315 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 10 00:25:15.951323 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Jul 10 00:25:15.951330 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 00:25:15.951337 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 00:25:15.951344 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:25:15.951352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 00:25:15.951361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 00:25:15.951433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:25:15.951440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 00:25:15.951456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 00:25:15.951465 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:25:15.951472 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 00:25:15.951489 kernel: TSC deadline timer available Jul 10 00:25:15.951496 kernel: CPU topo: Max. 
logical packages: 1 Jul 10 00:25:15.951503 kernel: CPU topo: Max. logical dies: 1 Jul 10 00:25:15.951511 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:25:15.951528 kernel: CPU topo: Max. threads per core: 1 Jul 10 00:25:15.951535 kernel: CPU topo: Num. cores per package: 4 Jul 10 00:25:15.951543 kernel: CPU topo: Num. threads per package: 4 Jul 10 00:25:15.951552 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 10 00:25:15.951562 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 00:25:15.951570 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 00:25:15.951577 kernel: kvm-guest: setup PV sched yield Jul 10 00:25:15.951585 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 10 00:25:15.951595 kernel: Booting paravirtualized kernel on KVM Jul 10 00:25:15.951603 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:25:15.951611 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 10 00:25:15.951618 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 10 00:25:15.951626 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 10 00:25:15.951633 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 00:25:15.951641 kernel: kvm-guest: PV spinlocks enabled Jul 10 00:25:15.951648 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 00:25:15.951657 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:25:15.951667 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:25:15.951682 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:25:15.951690 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:25:15.951697 kernel: Fallback order for Node 0: 0 Jul 10 00:25:15.951705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Jul 10 00:25:15.951712 kernel: Policy zone: DMA32 Jul 10 00:25:15.951721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:25:15.951728 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:25:15.951738 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:25:15.951746 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:25:15.951753 kernel: Dynamic Preempt: voluntary Jul 10 00:25:15.951761 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:25:15.951769 kernel: rcu: RCU event tracing is enabled. Jul 10 00:25:15.951777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:25:15.951785 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:25:15.951792 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:25:15.951800 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:25:15.951809 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:25:15.951817 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:25:15.951825 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 10 00:25:15.951832 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:25:15.951842 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:25:15.951850 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 00:25:15.951858 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 00:25:15.951865 kernel: Console: colour dummy device 80x25 Jul 10 00:25:15.951873 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:25:15.951883 kernel: ACPI: Core revision 20240827 Jul 10 00:25:15.951890 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 00:25:15.951898 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:25:15.951906 kernel: x2apic enabled Jul 10 00:25:15.951913 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:25:15.951921 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 10 00:25:15.951928 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 10 00:25:15.951936 kernel: kvm-guest: setup PV IPIs Jul 10 00:25:15.951944 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:25:15.951954 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 00:25:15.951961 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 10 00:25:15.951969 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 00:25:15.951976 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 00:25:15.951984 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 00:25:15.951994 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:25:15.952001 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:25:15.952009 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:25:15.952016 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 00:25:15.952026 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 00:25:15.952034 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:25:15.952041 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 00:25:15.952049 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 10 00:25:15.952057 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 10 00:25:15.952065 kernel: x86/bugs: return thunk changed Jul 10 00:25:15.952072 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 10 00:25:15.952080 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:25:15.952090 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:25:15.952097 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:25:15.952105 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:25:15.952112 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 10 00:25:15.952120 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:25:15.952128 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:25:15.952135 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:25:15.952143 kernel: landlock: Up and running. Jul 10 00:25:15.952150 kernel: SELinux: Initializing. Jul 10 00:25:15.952160 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:25:15.952168 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:25:15.952175 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 00:25:15.952183 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 00:25:15.952190 kernel: ... version: 0 Jul 10 00:25:15.952198 kernel: ... bit width: 48 Jul 10 00:25:15.952207 kernel: ... generic registers: 6 Jul 10 00:25:15.952215 kernel: ... value mask: 0000ffffffffffff Jul 10 00:25:15.952222 kernel: ... max period: 00007fffffffffff Jul 10 00:25:15.952232 kernel: ... fixed-purpose events: 0 Jul 10 00:25:15.952240 kernel: ... event mask: 000000000000003f Jul 10 00:25:15.952247 kernel: signal: max sigframe size: 1776 Jul 10 00:25:15.952255 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:25:15.952262 kernel: rcu: Max phase no-delay instances is 400. Jul 10 00:25:15.952270 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 00:25:15.952279 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:25:15.952289 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:25:15.952299 kernel: .... node #0, CPUs: #1 #2 #3 Jul 10 00:25:15.952309 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:25:15.952319 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 10 00:25:15.952327 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137068K reserved, 0K cma-reserved) Jul 10 00:25:15.952334 kernel: devtmpfs: initialized Jul 10 00:25:15.952342 kernel: x86/mm: Memory block size: 128MB Jul 10 00:25:15.952349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Jul 10 00:25:15.952357 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Jul 10 00:25:15.952365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:25:15.952385 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:25:15.952396 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:25:15.952403 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:25:15.952411 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:25:15.952419 kernel: audit: type=2000 audit(1752107113.167:1): state=initialized audit_enabled=0 res=1 Jul 10 00:25:15.952426 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:25:15.952434 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:25:15.952441 kernel: cpuidle: using governor menu Jul 10 00:25:15.952449 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:25:15.952456 kernel: dca service started, version 1.12.1 Jul 10 00:25:15.952466 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 10 00:25:15.952474 kernel: PCI: Using configuration type 1 for base access Jul 10 00:25:15.952481 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Jul 10 00:25:15.952489 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:25:15.952497 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 00:25:15.952504 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:25:15.952512 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:25:15.952519 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:25:15.952527 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:25:15.952536 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:25:15.952544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:25:15.952551 kernel: ACPI: Interpreter enabled Jul 10 00:25:15.952559 kernel: ACPI: PM: (supports S0 S5) Jul 10 00:25:15.952566 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:25:15.952574 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:25:15.952581 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 00:25:15.952589 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 00:25:15.952597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:25:15.952846 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:25:15.952977 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 00:25:15.953100 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 00:25:15.953110 kernel: PCI host bridge to bus 0000:00 Jul 10 00:25:15.953240 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:25:15.953353 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 00:25:15.953521 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:25:15.953703 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 10 00:25:15.953835 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 10 00:25:15.953960 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:25:15.954078 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:25:15.954228 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 10 00:25:15.954395 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 10 00:25:15.954526 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 10 00:25:15.954647 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 10 00:25:15.954790 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 10 00:25:15.954915 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:25:15.955055 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 00:25:15.955178 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 10 00:25:15.955297 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 10 00:25:15.955438 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 10 00:25:15.955574 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:25:15.955704 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 10 00:25:15.955840 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 10 00:25:15.955975 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 10 00:25:15.956115 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 00:25:15.956242 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 10 00:25:15.956362 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 10 00:25:15.956531 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 10 00:25:15.956652 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 10 00:25:15.956800 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 10 00:25:15.956942 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 00:25:15.957090 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 10 00:25:15.957218 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 10 00:25:15.957335 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 10 00:25:15.957489 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 10 00:25:15.957610 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 10 00:25:15.957620 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 00:25:15.957628 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 00:25:15.957636 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:25:15.957644 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 00:25:15.957656 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 00:25:15.957663 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 00:25:15.957671 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 00:25:15.957686 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 00:25:15.957694 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 00:25:15.957701 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 00:25:15.957709 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 00:25:15.957717 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 00:25:15.957727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 00:25:15.957734 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 00:25:15.957742 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 00:25:15.957749 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 00:25:15.957757 kernel: iommu: Default domain type: Translated Jul 10 00:25:15.957764 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:25:15.957772 kernel: efivars: Registered efivars operations Jul 10 00:25:15.957780 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:25:15.957787 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:25:15.957795 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Jul 10 00:25:15.957805 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Jul 10 00:25:15.957812 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Jul 10 00:25:15.957820 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Jul 10 00:25:15.957827 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Jul 10 00:25:15.957958 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 00:25:15.958107 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 00:25:15.958231 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:25:15.958241 kernel: vgaarb: loaded Jul 10 00:25:15.958252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 00:25:15.958260 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 00:25:15.958268 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 00:25:15.958275 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:25:15.958283 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:25:15.958291 kernel: pnp: PnP ACPI init Jul 10 00:25:15.958457 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 10 00:25:15.958470 kernel: pnp: PnP ACPI: found 6 devices Jul 10 00:25:15.958481 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:25:15.958489 kernel: NET: Registered PF_INET protocol family Jul 10 00:25:15.958497 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:25:15.958504 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:25:15.958512 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:25:15.958520 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:25:15.958528 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 00:25:15.958535 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:25:15.958543 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:25:15.958553 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:25:15.958560 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:25:15.958568 kernel: NET: Registered PF_XDP protocol family Jul 10 00:25:15.958697 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 10 00:25:15.958820 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 10 00:25:15.958931 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 00:25:15.959040 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 00:25:15.959148 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 00:25:15.959262 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 10 00:25:15.959386 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 10 00:25:15.959497 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:25:15.959507 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:25:15.959516 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 00:25:15.959524 kernel: Initialise system trusted keyrings Jul 10 00:25:15.959532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:25:15.959540 kernel: Key type asymmetric registered Jul 10 00:25:15.959547 kernel: Asymmetric key parser 'x509' registered Jul 10 00:25:15.959559 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 00:25:15.959584 kernel: io scheduler mq-deadline registered Jul 10 00:25:15.959594 kernel: io scheduler kyber registered Jul 10 00:25:15.959601 kernel: io scheduler bfq registered Jul 10 00:25:15.959609 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:25:15.959618 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 00:25:15.959626 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 00:25:15.959634 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 00:25:15.959641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:25:15.959652 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:25:15.959660 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 00:25:15.959668 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:25:15.959684 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:25:15.959826 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 00:25:15.959838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:25:15.959956 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 00:25:15.960070 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:25:15 UTC (1752107115) Jul 10 00:25:15.960187 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 10 00:25:15.960197 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 10 00:25:15.960205 kernel: efifb: probing for efifb Jul 10 00:25:15.960214 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 10 00:25:15.960222 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 10 00:25:15.960230 kernel: efifb: scrolling: redraw Jul 10 00:25:15.960238 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 00:25:15.960248 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 00:25:15.960256 kernel: fb0: EFI VGA frame buffer device Jul 10 00:25:15.960266 kernel: pstore: Using crash dump compression: deflate Jul 10 00:25:15.960274 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 00:25:15.960284 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:25:15.960292 kernel: Segment Routing with IPv6 Jul 10 00:25:15.960300 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:25:15.960310 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:25:15.960318 kernel: Key type dns_resolver registered Jul 10 00:25:15.960326 kernel: IPI shorthand broadcast: enabled Jul 10 00:25:15.960334 kernel: sched_clock: Marking stable (3460004642, 196142549)->(3688820695, -32673504) Jul 10 00:25:15.960342 kernel: registered taskstats version 1 Jul 10 00:25:15.960350 kernel: Loading compiled-in X.509 certificates Jul 10 00:25:15.960358 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 00:25:15.960384 kernel: Demotion targets for Node 0: null Jul 10 00:25:15.960401 kernel: Key type .fscrypt registered Jul 10 00:25:15.960430 kernel: Key type fscrypt-provisioning registered Jul 10 00:25:15.960440 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:25:15.960449 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:25:15.960457 kernel: ima: No architecture policies found Jul 10 00:25:15.960465 kernel: clk: Disabling unused clocks Jul 10 00:25:15.960472 kernel: Warning: unable to open an initial console. 
Jul 10 00:25:15.960481 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:25:15.960488 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:25:15.960497 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:25:15.960507 kernel: Run /init as init process
Jul 10 00:25:15.960515 kernel: with arguments:
Jul 10 00:25:15.960524 kernel: /init
Jul 10 00:25:15.960531 kernel: with environment:
Jul 10 00:25:15.960540 kernel: HOME=/
Jul 10 00:25:15.960547 kernel: TERM=linux
Jul 10 00:25:15.960555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:25:15.960564 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:25:15.960578 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:25:15.960587 systemd[1]: Detected virtualization kvm.
Jul 10 00:25:15.960595 systemd[1]: Detected architecture x86-64.
Jul 10 00:25:15.960603 systemd[1]: Running in initrd.
Jul 10 00:25:15.960612 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:25:15.960620 systemd[1]: Hostname set to .
Jul 10 00:25:15.960629 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:25:15.960637 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:25:15.960647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:25:15.960656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:25:15.960665 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:25:15.960681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:25:15.960690 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:25:15.960699 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:25:15.960711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:25:15.960720 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:25:15.960729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:25:15.960737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:25:15.960746 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:25:15.960755 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:25:15.960763 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:25:15.960771 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:25:15.960780 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:25:15.960791 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:25:15.960799 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:25:15.960808 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:25:15.960819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:25:15.960827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:25:15.960836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:25:15.960846 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:25:15.960857 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:25:15.960870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:25:15.960881 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:25:15.960892 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:25:15.960903 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:25:15.960913 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:25:15.960924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:25:15.960934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:15.960945 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:25:15.960959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:25:15.960969 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:25:15.960980 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:25:15.961015 systemd-journald[220]: Collecting audit messages is disabled. Jul 10 00:25:15.961037 systemd-journald[220]: Journal started Jul 10 00:25:15.961055 systemd-journald[220]: Runtime Journal (/run/log/journal/6632137a2fd84694a29ba7876901cdf2) is 6M, max 48.2M, 42.2M free. Jul 10 00:25:15.951507 systemd-modules-load[221]: Inserted module 'overlay' Jul 10 00:25:15.963448 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:25:15.963296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:25:15.967484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:25:15.970450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:25:15.980397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:25:15.981630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:15.984639 kernel: Bridge firewalling registered Jul 10 00:25:15.982117 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 10 00:25:15.985440 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:25:15.989543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:25:15.992008 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:25:15.992708 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:16.006536 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:25:16.007662 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:25:16.019489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:25:16.022783 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:25:16.038553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:25:16.039800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:25:16.062207 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:25:16.079661 systemd-resolved[258]: Positive Trust Anchors: Jul 10 00:25:16.079685 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:25:16.079716 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:25:16.082885 systemd-resolved[258]: Defaulting to hostname 'linux'. Jul 10 00:25:16.084341 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:25:16.090828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:25:16.195429 kernel: SCSI subsystem initialized Jul 10 00:25:16.205399 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:25:16.216402 kernel: iscsi: registered transport (tcp) Jul 10 00:25:16.238405 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:25:16.238440 kernel: QLogic iSCSI HBA Driver Jul 10 00:25:16.260232 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:25:16.285337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:25:16.286027 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:25:16.356709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:25:16.360753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:25:16.424435 kernel: raid6: avx2x4 gen() 29260 MB/s Jul 10 00:25:16.441414 kernel: raid6: avx2x2 gen() 30114 MB/s Jul 10 00:25:16.458504 kernel: raid6: avx2x1 gen() 24092 MB/s Jul 10 00:25:16.458586 kernel: raid6: using algorithm avx2x2 gen() 30114 MB/s Jul 10 00:25:16.476572 kernel: raid6: .... xor() 19419 MB/s, rmw enabled Jul 10 00:25:16.476715 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:25:16.498430 kernel: xor: automatically using best checksumming function avx Jul 10 00:25:16.679440 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:25:16.691879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:25:16.699945 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:25:16.774833 systemd-udevd[471]: Using default interface naming scheme 'v255'. 
Jul 10 00:25:16.786949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:25:16.789216 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:25:16.830645 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jul 10 00:25:16.867559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:25:16.871923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:25:17.276862 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:25:17.281386 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:25:17.331405 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 00:25:17.344404 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:25:17.347029 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:25:17.355401 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:25:17.361729 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:25:17.361760 kernel: GPT:9289727 != 19775487 Jul 10 00:25:17.361771 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:25:17.364060 kernel: GPT:9289727 != 19775487 Jul 10 00:25:17.364081 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:25:17.364091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:25:17.369403 kernel: libata version 3.00 loaded. Jul 10 00:25:17.372463 kernel: AES CTR mode by8 optimization enabled Jul 10 00:25:17.382628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:25:17.384052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:17.404483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:17.408970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:17.413494 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:25:17.416423 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:25:17.419492 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:25:17.424063 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 00:25:17.424462 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 00:25:17.424693 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:25:17.443418 kernel: scsi host0: ahci Jul 10 00:25:17.450427 kernel: scsi host1: ahci Jul 10 00:25:17.451411 kernel: scsi host2: ahci Jul 10 00:25:17.453404 kernel: scsi host3: ahci Jul 10 00:25:17.453664 kernel: scsi host4: ahci Jul 10 00:25:17.453614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jul 10 00:25:17.455394 kernel: scsi host5: ahci Jul 10 00:25:17.458596 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 00:25:17.458627 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 00:25:17.458653 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 00:25:17.462596 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 00:25:17.462676 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 00:25:17.462691 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 00:25:17.470675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:17.483584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:25:17.495673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:25:17.505064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:25:17.505784 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 00:25:17.507170 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:25:17.541805 disk-uuid[633]: Primary Header is updated. Jul 10 00:25:17.541805 disk-uuid[633]: Secondary Entries is updated. Jul 10 00:25:17.541805 disk-uuid[633]: Secondary Header is updated. Jul 10 00:25:17.547399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:25:17.551406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:25:17.771878 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:25:17.771953 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:25:17.771965 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:25:17.771975 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:25:17.773418 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:25:17.773498 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:25:17.774422 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:25:17.775406 kernel: ata3.00: applying bridge limits Jul 10 00:25:17.776399 kernel: ata3.00: configured for UDMA/100 Jul 10 00:25:17.776427 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:25:17.829478 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:25:17.829873 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:25:17.856420 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:25:18.294106 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:25:18.295308 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:25:18.296663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:25:18.296953 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:25:18.298157 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:25:18.329267 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:25:18.581402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:25:18.581475 disk-uuid[634]: The operation has completed successfully. 
Jul 10 00:25:18.745419 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:25:18.745555 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:25:18.747562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:25:18.783813 sh[663]: Success Jul 10 00:25:18.802473 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:25:18.802529 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:25:18.803689 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:25:18.813393 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:25:18.847211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:25:18.857108 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:25:18.886585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:25:18.895426 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:25:18.895474 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675) Jul 10 00:25:18.896922 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:25:18.896949 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:18.898548 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:25:18.904200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:25:18.905798 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:25:18.907353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:25:18.908392 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:25:18.910256 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:25:18.938436 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Jul 10 00:25:18.940848 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:18.940876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:18.940890 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:25:18.949404 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:18.950999 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:25:18.954901 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:25:19.100752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:25:19.108513 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 10 00:25:19.151184 ignition[753]: Ignition 2.21.0
Jul 10 00:25:19.151208 ignition[753]: Stage: fetch-offline
Jul 10 00:25:19.151285 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:25:19.151299 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:25:19.151460 ignition[753]: parsed url from cmdline: ""
Jul 10 00:25:19.151469 ignition[753]: no config URL provided
Jul 10 00:25:19.151478 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:25:19.151491 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:25:19.151523 ignition[753]: op(1): [started] loading QEMU firmware config module
Jul 10 00:25:19.151535 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:25:19.164084 ignition[753]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:25:19.172828 systemd-networkd[850]: lo: Link UP
Jul 10 00:25:19.172844 systemd-networkd[850]: lo: Gained carrier
Jul 10 00:25:19.174813 systemd-networkd[850]: Enumeration completed
Jul 10 00:25:19.175277 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:25:19.175282 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:25:19.175449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:25:19.178534 systemd-networkd[850]: eth0: Link UP
Jul 10 00:25:19.178539 systemd-networkd[850]: eth0: Gained carrier
Jul 10 00:25:19.178549 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:25:19.179967 systemd[1]: Reached target network.target - Network.
Jul 10 00:25:19.197433 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:25:19.226072 ignition[753]: parsing config with SHA512: e463d5ef561e9ec625f53dac38ef520443ce76ec00e0fa7e72305ce746529f868814ae680a2ab5988cf0b8b5c08ee01e0798c741bf65fab548d060fd2aba164a
Jul 10 00:25:19.373065 unknown[753]: fetched base config from "system"
Jul 10 00:25:19.373084 unknown[753]: fetched user config from "qemu"
Jul 10 00:25:19.373582 ignition[753]: fetch-offline: fetch-offline passed
Jul 10 00:25:19.376681 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:25:19.373677 ignition[753]: Ignition finished successfully
Jul 10 00:25:19.378274 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:25:19.379170 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:25:19.429767 ignition[859]: Ignition 2.21.0
Jul 10 00:25:19.429782 ignition[859]: Stage: kargs
Jul 10 00:25:19.429948 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:25:19.429960 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:25:19.488205 ignition[859]: kargs: kargs passed
Jul 10 00:25:19.488351 ignition[859]: Ignition finished successfully
Jul 10 00:25:19.494779 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:25:19.497131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:25:19.534698 ignition[867]: Ignition 2.21.0 Jul 10 00:25:19.536295 ignition[867]: Stage: disks Jul 10 00:25:19.536567 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:19.536597 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:25:19.537531 ignition[867]: disks: disks passed Jul 10 00:25:19.537596 ignition[867]: Ignition finished successfully Jul 10 00:25:19.541994 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:25:19.542533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:25:19.542852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:25:19.543196 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:25:19.544327 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:25:19.552527 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:25:19.554283 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:25:19.588529 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:25:19.613279 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:25:19.618070 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:25:19.811410 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:25:19.812337 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:25:19.813504 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:25:19.816731 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:25:19.818730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:25:19.821100 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:25:19.821163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:25:19.823071 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:25:19.832214 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:25:19.835650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:25:19.839390 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Jul 10 00:25:19.841621 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:19.841641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:19.841652 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:25:19.846329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:25:19.876595 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:25:19.922689 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:25:19.927184 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:25:19.932004 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:25:20.026817 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:25:20.029550 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 10 00:25:20.031493 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:25:20.050991 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:25:20.052303 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:20.066747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:25:20.106022 ignition[1001]: INFO : Ignition 2.21.0 Jul 10 00:25:20.106022 ignition[1001]: INFO : Stage: mount Jul 10 00:25:20.107953 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:20.107953 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:25:20.107953 ignition[1001]: INFO : mount: mount passed Jul 10 00:25:20.107953 ignition[1001]: INFO : Ignition finished successfully Jul 10 00:25:20.114202 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:25:20.115970 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:25:20.142469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:25:20.176421 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Jul 10 00:25:20.178588 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:25:20.178610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:25:20.178621 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:25:20.183230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:25:20.230444 ignition[1030]: INFO : Ignition 2.21.0 Jul 10 00:25:20.230444 ignition[1030]: INFO : Stage: files Jul 10 00:25:20.232620 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:20.232620 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:25:20.232620 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:25:20.236126 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:25:20.236126 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:25:20.239342 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:25:20.239342 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:25:20.239342 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:25:20.239088 unknown[1030]: wrote ssh authorized keys file for user: core Jul 10 00:25:20.244950 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 00:25:20.244950 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 10 00:25:20.312757 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:25:20.465352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 10 00:25:20.465352 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:25:20.469293 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 10 00:25:20.944934 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 00:25:21.135207 systemd-networkd[850]: eth0: Gained IPv6LL Jul 10 00:25:21.268286 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:25:21.268286 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:25:21.272688 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 00:25:21.286003 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 10 00:25:21.922021 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 00:25:22.620028 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 10 00:25:22.620028 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 00:25:22.625560 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:25:23.023869 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 
00:25:23.023869 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 00:25:23.023869 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 10 00:25:23.023869 ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:25:23.031286 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:25:23.031286 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 10 00:25:23.031286 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:25:23.058265 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:25:23.066692 ignition[1030]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:25:23.068920 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:25:23.068920 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:25:23.072182 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:25:23.072182 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:25:23.072182 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:25:23.072182 ignition[1030]: INFO : files: files passed Jul 10 00:25:23.072182 ignition[1030]: INFO : Ignition finished successfully Jul 10 00:25:23.076673 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 00:25:23.080271 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 00:25:23.082490 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 00:25:23.110398 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:25:23.110612 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 00:25:23.114042 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 00:25:23.116988 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:25:23.116988 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:25:23.121009 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:25:23.124144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:25:23.127644 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 00:25:23.130124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 00:25:23.201640 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:25:23.201780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 00:25:23.202902 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 10 00:25:23.208429 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 00:25:23.208929 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 00:25:23.212259 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 00:25:23.256107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:25:23.259020 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:25:23.290305 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:25:23.290781 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:25:23.291163 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:25:23.291758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:25:23.291926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:25:23.297098 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:25:23.297477 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:25:23.298327 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:25:23.303148 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:25:23.303467 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:25:23.304224 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:25:23.309225 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:25:23.309726 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:25:23.310126 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:25:23.310753 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:25:23.311066 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:25:23.311361 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:25:23.311512 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:25:23.312295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:25:23.312843 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:25:23.313114 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:25:23.326783 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:25:23.327575 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:25:23.327723 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:25:23.333077 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:25:23.333845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:25:23.334738 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:25:23.336947 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:25:23.341459 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:25:23.342111 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:25:23.342475 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:25:23.343015 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 10 00:25:23.343147 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:25:23.347930 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:25:23.348022 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:25:23.349515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:25:23.349635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:25:23.351720 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:25:23.351828 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:25:23.354807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:25:23.360307 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:25:23.361288 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:25:23.361454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:25:23.361950 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:25:23.362048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:25:23.369210 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:25:23.369360 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:25:23.386292 ignition[1085]: INFO : Ignition 2.21.0 Jul 10 00:25:23.386292 ignition[1085]: INFO : Stage: umount Jul 10 00:25:23.388213 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:25:23.388213 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:25:23.388213 ignition[1085]: INFO : umount: umount passed Jul 10 00:25:23.388213 ignition[1085]: INFO : Ignition finished successfully Jul 10 00:25:23.393328 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:25:23.393509 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:25:23.394107 systemd[1]: Stopped target network.target - Network. Jul 10 00:25:23.396774 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:25:23.396832 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:25:23.398994 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:25:23.399044 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:25:23.399732 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:25:23.399782 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:25:23.400092 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:25:23.400135 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:25:23.400553 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:25:23.400962 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:25:23.402652 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:25:23.411571 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:25:23.411731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:25:23.416102 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 00:25:23.416424 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:25:23.416560 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 10 00:25:23.418105 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:25:23.418215 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:25:23.419292 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:25:23.419345 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:25:23.423751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:25:23.425859 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:25:23.426013 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:25:23.430396 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 00:25:23.430650 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 00:25:23.431680 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:25:23.431723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:25:23.436527 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:25:23.437012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:25:23.437063 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:25:23.437395 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:25:23.437438 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:25:23.443802 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:25:23.443852 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:25:23.444340 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:25:23.445984 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:25:23.462802 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:25:23.463010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:25:23.463944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:25:23.464048 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:25:23.466791 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:25:23.466829 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:25:23.467086 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:25:23.467137 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:25:23.467963 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:25:23.468007 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:25:23.474551 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:25:23.474603 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:25:23.478515 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:25:23.480592 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 00:25:23.481960 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:25:23.485498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 10 00:25:23.485570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:25:23.490104 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 00:25:23.490163 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:25:23.493801 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:25:23.493854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:25:23.494282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:25:23.494325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:23.499933 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:25:23.500046 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:25:23.501006 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:25:23.501126 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:25:23.504414 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:25:23.505618 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:25:23.523639 systemd[1]: Switching root. Jul 10 00:25:23.569634 systemd-journald[220]: Journal stopped Jul 10 00:25:25.416041 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Jul 10 00:25:25.416109 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:25:25.416129 kernel: SELinux: policy capability open_perms=1 Jul 10 00:25:25.416140 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:25:25.416152 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:25:25.416163 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:25:25.416178 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:25:25.416190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:25:25.416201 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:25:25.416212 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 00:25:25.416224 kernel: audit: type=1403 audit(1752107124.387:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:25:25.416246 systemd[1]: Successfully loaded SELinux policy in 56.458ms. Jul 10 00:25:25.416265 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.589ms. Jul 10 00:25:25.416278 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:25:25.416291 systemd[1]: Detected virtualization kvm. Jul 10 00:25:25.416306 systemd[1]: Detected architecture x86-64. Jul 10 00:25:25.416318 systemd[1]: Detected first boot. Jul 10 00:25:25.416330 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:25:25.416343 zram_generator::config[1130]: No configuration found. 
Jul 10 00:25:25.416362 kernel: Guest personality initialized and is inactive Jul 10 00:25:25.417539 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 00:25:25.417555 kernel: Initialized host personality Jul 10 00:25:25.417581 kernel: NET: Registered PF_VSOCK protocol family Jul 10 00:25:25.417595 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:25:25.417614 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 00:25:25.417626 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:25:25.417639 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:25:25.417651 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:25:25.417664 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:25:25.417676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:25:25.417688 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:25:25.417700 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:25:25.417726 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:25:25.417739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:25:25.417751 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:25:25.417764 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:25:25.417776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:25:25.417789 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:25:25.417801 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:25:25.417814 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:25:25.417827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:25:25.417841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:25:25.417854 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 00:25:25.417867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:25:25.417879 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:25:25.417891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:25:25.417903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:25:25.417915 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:25:25.417931 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:25:25.417943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:25:25.417955 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:25:25.417967 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:25:25.417979 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:25:25.417991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 10 00:25:25.418004 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:25:25.418016 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 00:25:25.418029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:25:25.418041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:25:25.418055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:25:25.418067 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:25:25.418080 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:25:25.418092 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:25:25.418104 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:25:25.418116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:25.418130 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:25:25.418142 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:25:25.418154 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:25:25.418169 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:25:25.418181 systemd[1]: Reached target machines.target - Containers. Jul 10 00:25:25.418194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:25:25.418206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:25.418218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:25:25.418230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:25:25.418243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:25:25.418254 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:25:25.418269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:25:25.418281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:25:25.418293 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:25:25.418306 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:25:25.418318 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:25:25.418330 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:25:25.418342 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:25:25.418354 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:25:25.418411 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:25.418426 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:25:25.418480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 10 00:25:25.418500 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:25:25.418512 kernel: loop: module loaded Jul 10 00:25:25.418550 systemd-journald[1194]: Collecting audit messages is disabled. Jul 10 00:25:25.418574 systemd-journald[1194]: Journal started Jul 10 00:25:25.418599 systemd-journald[1194]: Runtime Journal (/run/log/journal/6632137a2fd84694a29ba7876901cdf2) is 6M, max 48.2M, 42.2M free. Jul 10 00:25:25.087973 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:25:25.114634 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:25:25.115135 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:25:25.436741 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:25:25.440631 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 00:25:25.451551 kernel: fuse: init (API version 7.41) Jul 10 00:25:25.451612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:25:25.455781 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:25:25.455825 systemd[1]: Stopped verity-setup.service. Jul 10 00:25:25.459408 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:25.463423 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:25:25.464177 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:25:25.465320 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:25:25.466521 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:25:25.467842 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:25:25.469161 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:25:25.472048 kernel: ACPI: bus type drm_connector registered Jul 10 00:25:25.472159 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:25:25.473691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:25:25.475500 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:25:25.475720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:25:25.478523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:25:25.478761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:25:25.481788 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:25:25.482237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:25:25.485047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:25:25.485390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:25:25.487552 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:25:25.487934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:25:25.489728 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:25:25.490041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:25:25.491846 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 10 00:25:25.493723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:25:25.495773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:25:25.497556 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 00:25:25.512820 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:25:25.516457 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:25:25.519658 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:25:25.522183 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:25:25.523673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:25:25.523789 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:25:25.526235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 00:25:25.532246 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:25:25.535075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:25:25.538086 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:25:25.542611 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:25:25.544135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:25:25.555680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:25:25.557421 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:25:25.559498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:25.569491 systemd-journald[1194]: Time spent on flushing to /var/log/journal/6632137a2fd84694a29ba7876901cdf2 is 25.151ms for 1035 entries. Jul 10 00:25:25.569491 systemd-journald[1194]: System Journal (/var/log/journal/6632137a2fd84694a29ba7876901cdf2) is 8M, max 195.6M, 187.6M free. Jul 10 00:25:25.609774 systemd-journald[1194]: Received client request to flush runtime journal. Jul 10 00:25:25.563768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:25:25.569071 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:25:25.573488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:25:25.575105 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:25:25.580598 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:25:25.583365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:25:25.594694 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:25:25.601602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:25:25.611170 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jul 10 00:25:25.611189 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. 
Jul 10 00:25:25.617150 kernel: loop0: detected capacity change from 0 to 146240 Jul 10 00:25:25.613424 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:25:25.622155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:25:25.628594 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:25:25.644690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:25:25.648242 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 00:25:25.653391 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:25:25.677406 kernel: loop1: detected capacity change from 0 to 224512 Jul 10 00:25:25.683993 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:25:25.689440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:25:25.712415 kernel: loop2: detected capacity change from 0 to 113872 Jul 10 00:25:25.721344 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jul 10 00:25:25.721821 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Jul 10 00:25:25.728760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:25:25.746416 kernel: loop3: detected capacity change from 0 to 146240 Jul 10 00:25:26.113439 kernel: loop4: detected capacity change from 0 to 224512 Jul 10 00:25:26.117606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:25:26.132415 kernel: loop5: detected capacity change from 0 to 113872 Jul 10 00:25:26.144517 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:25:26.145328 (sd-merge)[1275]: Merged extensions into '/usr'. Jul 10 00:25:26.162748 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:25:26.162769 systemd[1]: Reloading... Jul 10 00:25:26.385187 zram_generator::config[1298]: No configuration found. Jul 10 00:25:26.447120 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:25:26.578362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:26.671510 systemd[1]: Reloading finished in 508 ms. Jul 10 00:25:26.705691 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:25:26.707840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:25:26.739469 systemd[1]: Starting ensure-sysext.service... Jul 10 00:25:26.742282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:25:26.766918 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:25:26.766956 systemd[1]: Reloading... Jul 10 00:25:26.802120 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:25:26.802164 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:25:26.802570 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 10 00:25:26.802901 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:25:26.804099 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:25:26.804506 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jul 10 00:25:26.804607 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jul 10 00:25:26.810615 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:25:26.810784 systemd-tmpfiles[1339]: Skipping /boot Jul 10 00:25:26.831791 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:25:26.831942 systemd-tmpfiles[1339]: Skipping /boot Jul 10 00:25:26.851415 zram_generator::config[1366]: No configuration found. Jul 10 00:25:27.148538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:25:27.265541 systemd[1]: Reloading finished in 497 ms. Jul 10 00:25:27.288439 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:25:27.317504 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:25:27.328679 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:25:27.331891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:25:27.334932 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:25:27.341800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:25:27.345500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:25:27.348611 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:25:27.352814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:27.353029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:27.356430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:25:27.359611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:25:27.370466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:25:27.372097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:25:27.372213 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:27.372308 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:27.375814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:25:27.376091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:25:27.380887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 10 00:25:27.382519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:25:27.384911 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:25:27.385468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:25:27.387515 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:25:27.402116 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:25:27.410077 systemd[1]: Finished ensure-sysext.service. Jul 10 00:25:27.425405 systemd-udevd[1409]: Using default interface naming scheme 'v255'. Jul 10 00:25:27.436997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:27.437337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:25:27.441494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:25:27.446164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:25:27.448990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:25:27.451788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:25:27.453444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:25:27.453638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:25:27.459613 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:25:27.463286 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:25:27.471598 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:25:27.472920 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:25:27.484109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:25:27.485036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:25:27.487363 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:25:27.487719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:25:27.491923 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:25:27.492906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:25:27.495018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:25:27.495345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:25:27.501657 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:25:27.501901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:25:27.508237 augenrules[1450]: No rules Jul 10 00:25:27.526516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:25:27.529481 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 10 00:25:27.529868 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:25:27.531409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:25:27.533467 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:25:27.551744 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:25:27.552968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:25:27.604082 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:25:27.803682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:25:27.819758 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:25:27.834174 systemd-networkd[1485]: lo: Link UP Jul 10 00:25:27.834187 systemd-networkd[1485]: lo: Gained carrier Jul 10 00:25:27.835860 systemd-networkd[1485]: Enumeration completed Jul 10 00:25:27.835998 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:25:27.836252 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:27.836263 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:25:27.837203 systemd-networkd[1485]: eth0: Link UP Jul 10 00:25:27.837353 systemd-networkd[1485]: eth0: Gained carrier Jul 10 00:25:27.837391 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:25:27.839224 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:25:27.846636 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:25:27.849435 systemd-networkd[1485]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:25:27.856040 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:25:27.860419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 00:25:27.862052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:25:27.888413 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:25:27.890249 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:25:27.891642 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:25:27.893094 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:25:28.799492 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:25:28.799552 systemd-timesyncd[1441]: Initial clock synchronization to Thu 2025-07-10 00:25:28.799343 UTC. Jul 10 00:25:28.805983 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:25:28.820306 systemd-resolved[1408]: Positive Trust Anchors: Jul 10 00:25:28.820328 systemd-resolved[1408]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:25:28.820364 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:25:28.834930 systemd-resolved[1408]: Defaulting to hostname 'linux'. Jul 10 00:25:28.837202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:25:28.838545 systemd[1]: Reached target network.target - Network. Jul 10 00:25:28.839562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:25:28.841034 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:25:28.842189 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:25:28.844551 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:25:28.845900 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:25:28.847986 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:25:28.859546 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 00:25:28.859912 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:25:28.860129 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:25:28.861883 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:25:28.863474 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:25:28.864819 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:25:28.864933 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:25:28.865990 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:25:28.868700 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:25:28.872803 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:25:28.877861 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:25:28.882744 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:25:28.884151 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:25:28.892560 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:25:28.894340 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:25:28.896416 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:25:28.899420 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:25:28.901563 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:25:28.903039 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 10 00:25:28.903104 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:25:28.907857 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:25:28.910258 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:25:28.914936 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:25:28.917688 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:25:28.924183 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:25:28.925390 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:25:28.927722 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:25:28.931619 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:25:28.934773 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:25:28.937063 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:25:28.941747 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:25:28.945311 jq[1531]: false Jul 10 00:25:28.956411 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:25:28.959464 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jul 10 00:25:28.958231 oslogin_cache_refresh[1533]: Refreshing passwd entry cache Jul 10 00:25:28.958775 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:25:28.959864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:25:28.960747 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:25:28.963671 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:25:28.970126 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:25:28.975577 oslogin_cache_refresh[1533]: Failure getting users, quitting Jul 10 00:25:28.985832 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting users, quitting Jul 10 00:25:28.985832 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:25:28.985832 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Refreshing group entry cache Jul 10 00:25:28.973814 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:25:28.975602 oslogin_cache_refresh[1533]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:25:28.974175 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:25:28.975680 oslogin_cache_refresh[1533]: Refreshing group entry cache Jul 10 00:25:28.976052 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:25:28.977144 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:25:28.990371 extend-filesystems[1532]: Found /dev/vda6 Jul 10 00:25:29.007606 jq[1541]: true Jul 10 00:25:28.986527 oslogin_cache_refresh[1533]: Failure getting groups, quitting Jul 10 00:25:29.007831 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Failure getting groups, quitting Jul 10 00:25:29.007831 google_oslogin_nss_cache[1533]: oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:25:28.989484 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:25:28.986545 oslogin_cache_refresh[1533]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:25:28.990753 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:25:29.018728 update_engine[1540]: I20250710 00:25:29.018023 1540 main.cc:92] Flatcar Update Engine starting Jul 10 00:25:29.018189 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:25:29.025280 tar[1546]: linux-amd64/LICENSE Jul 10 00:25:29.027696 extend-filesystems[1532]: Found /dev/vda9 Jul 10 00:25:29.020275 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:25:29.031090 tar[1546]: linux-amd64/helm Jul 10 00:25:29.031116 extend-filesystems[1532]: Checking size of /dev/vda9 Jul 10 00:25:29.037804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:29.059769 extend-filesystems[1532]: Resized partition /dev/vda9 Jul 10 00:25:29.067035 jq[1555]: true Jul 10 00:25:29.103651 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:25:29.110642 dbus-daemon[1529]: [system] SELinux support is enabled Jul 10 00:25:29.125808 update_engine[1540]: I20250710 00:25:29.122344 1540 update_check_scheduler.cc:74] Next update check in 5m59s Jul 10 00:25:29.110897 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:25:29.118256 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:25:29.118402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:25:29.120247 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:25:29.120359 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:25:29.122562 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:25:29.126727 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:25:29.136932 extend-filesystems[1576]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:25:29.324419 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:25:29.294918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:25:29.295253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:25:29.306371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:25:29.311261 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:25:29.318983 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 10 00:25:29.382504 kernel: kvm_amd: TSC scaling supported Jul 10 00:25:29.382589 kernel: kvm_amd: Nested Virtualization enabled Jul 10 00:25:29.382606 kernel: kvm_amd: Nested Paging enabled Jul 10 00:25:29.382618 kernel: kvm_amd: LBR virtualization supported Jul 10 00:25:29.383757 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 00:25:29.383824 kernel: kvm_amd: Virtual GIF supported Jul 10 00:25:29.406476 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:25:29.461010 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:25:29.461058 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:25:29.461528 systemd-logind[1538]: New seat seat0. Jul 10 00:25:29.463112 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:25:29.487001 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:25:29.605556 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:25:29.728477 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:25:29.758920 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:25:29.758920 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:25:29.758920 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:25:29.764360 extend-filesystems[1532]: Resized filesystem in /dev/vda9 Jul 10 00:25:29.765524 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:25:29.766825 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:25:29.769241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
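The extend-filesystems run above grows the root partition online: EXT4-fs reports a resize of /dev/vda9 from 553472 to 1864699 blocks at a 4k block size. A short sketch of the size arithmetic implied by those block counts (Python used only as a calculator; all numbers come from the log lines above):

    BLOCK = 4096                          # 4k blocks, as reported by EXT4-fs above
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB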
Jul 10 00:25:29.894699 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:25:29.926494 tar[1546]: linux-amd64/README.md Jul 10 00:25:29.927530 containerd[1574]: time="2025-07-10T00:25:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:25:29.928727 containerd[1574]: time="2025-07-10T00:25:29.928225113Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:25:29.942573 containerd[1574]: time="2025-07-10T00:25:29.942512327Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.965µs" Jul 10 00:25:29.942573 containerd[1574]: time="2025-07-10T00:25:29.942551721Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:25:29.942573 containerd[1574]: time="2025-07-10T00:25:29.942569785Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:25:29.942851 containerd[1574]: time="2025-07-10T00:25:29.942822479Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:25:29.942851 containerd[1574]: time="2025-07-10T00:25:29.942844861Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:25:29.942919 containerd[1574]: time="2025-07-10T00:25:29.942870148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:25:29.942989 containerd[1574]: time="2025-07-10T00:25:29.942958213Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:25:29.942989 containerd[1574]: time="2025-07-10T00:25:29.942979553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:25:29.943591 containerd[1574]: time="2025-07-10T00:25:29.943561925Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:25:29.943591 containerd[1574]: time="2025-07-10T00:25:29.943582023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:25:29.943653 containerd[1574]: time="2025-07-10T00:25:29.943613171Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:25:29.943653 containerd[1574]: time="2025-07-10T00:25:29.943626516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:25:29.944179 containerd[1574]: time="2025-07-10T00:25:29.944146391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:25:29.944654 containerd[1574]: time="2025-07-10T00:25:29.944618637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:25:29.944689 containerd[1574]: time="2025-07-10T00:25:29.944667929Z" level=info msg="skip 
loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:25:29.944689 containerd[1574]: time="2025-07-10T00:25:29.944680824Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:25:29.944757 containerd[1574]: time="2025-07-10T00:25:29.944719586Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:25:29.945091 containerd[1574]: time="2025-07-10T00:25:29.945058963Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:25:29.945195 containerd[1574]: time="2025-07-10T00:25:29.945167677Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:25:29.952938 containerd[1574]: time="2025-07-10T00:25:29.952879519Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:25:29.953016 containerd[1574]: time="2025-07-10T00:25:29.952963617Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:25:29.953016 containerd[1574]: time="2025-07-10T00:25:29.952985137Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:25:29.953095 containerd[1574]: time="2025-07-10T00:25:29.953074294Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:25:29.953166 containerd[1574]: time="2025-07-10T00:25:29.953098560Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:25:29.953166 containerd[1574]: time="2025-07-10T00:25:29.953113438Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:25:29.953166 containerd[1574]: time="2025-07-10T00:25:29.953129137Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:25:29.953166 containerd[1574]: time="2025-07-10T00:25:29.953146440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:25:29.953166 containerd[1574]: time="2025-07-10T00:25:29.953159755Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:25:29.953327 containerd[1574]: time="2025-07-10T00:25:29.953275602Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:25:29.953327 containerd[1574]: time="2025-07-10T00:25:29.953297503Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:25:29.953327 containerd[1574]: time="2025-07-10T00:25:29.953314936Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:25:29.953590 containerd[1574]: time="2025-07-10T00:25:29.953540068Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:25:29.953651 containerd[1574]: time="2025-07-10T00:25:29.953598217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:25:29.953707 containerd[1574]: time="2025-07-10T00:25:29.953662427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Jul 10 00:25:29.953752 containerd[1574]: time="2025-07-10T00:25:29.953703214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:25:29.953752 containerd[1574]: time="2025-07-10T00:25:29.953719885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:25:29.953752 containerd[1574]: time="2025-07-10T00:25:29.953746275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:25:29.953925 containerd[1574]: time="2025-07-10T00:25:29.953775399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:25:29.953925 containerd[1574]: time="2025-07-10T00:25:29.953807760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:25:29.953925 containerd[1574]: time="2025-07-10T00:25:29.953835261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:25:29.953925 containerd[1574]: time="2025-07-10T00:25:29.953865538Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:25:29.953925 containerd[1574]: time="2025-07-10T00:25:29.953885666Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:25:29.955674 containerd[1574]: time="2025-07-10T00:25:29.955474526Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:25:29.955674 containerd[1574]: time="2025-07-10T00:25:29.955516134Z" level=info msg="Start snapshots syncer" Jul 10 00:25:29.955674 containerd[1574]: time="2025-07-10T00:25:29.955546761Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:25:29.955920 containerd[1574]: time="2025-07-10T00:25:29.955870067Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:25:29.956046 containerd[1574]: time="2025-07-10T00:25:29.955950508Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:25:29.957470 containerd[1574]: time="2025-07-10T00:25:29.957428039Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:25:29.957747 containerd[1574]: time="2025-07-10T00:25:29.957714937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:25:29.957838 containerd[1574]: time="2025-07-10T00:25:29.957815506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:25:29.957894 containerd[1574]: time="2025-07-10T00:25:29.957881950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:25:29.957954 containerd[1574]: time="2025-07-10T00:25:29.957938897Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:25:29.958029 containerd[1574]: time="2025-07-10T00:25:29.958014629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:25:29.958085 containerd[1574]: time="2025-07-10T00:25:29.958072728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:25:29.958141 containerd[1574]: time="2025-07-10T00:25:29.958128393Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:25:29.958221 containerd[1574]: time="2025-07-10T00:25:29.958202271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:25:29.958276 containerd[1574]: 
time="2025-07-10T00:25:29.958264087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:25:29.958327 containerd[1574]: time="2025-07-10T00:25:29.958315173Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:25:29.958464 containerd[1574]: time="2025-07-10T00:25:29.958411443Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:25:29.958574 containerd[1574]: time="2025-07-10T00:25:29.958539063Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:25:29.958635 containerd[1574]: time="2025-07-10T00:25:29.958622189Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:25:29.958717 containerd[1574]: time="2025-07-10T00:25:29.958700716Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:25:29.958765 containerd[1574]: time="2025-07-10T00:25:29.958753785Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:25:29.958825 containerd[1574]: time="2025-07-10T00:25:29.958811053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:25:29.958914 containerd[1574]: time="2025-07-10T00:25:29.958892516Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:25:29.958991 containerd[1574]: time="2025-07-10T00:25:29.958978858Z" level=info msg="runtime interface created" Jul 10 00:25:29.959034 containerd[1574]: time="2025-07-10T00:25:29.959023992Z" level=info msg="created NRI interface" Jul 10 00:25:29.959081 containerd[1574]: time="2025-07-10T00:25:29.959069217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:25:29.959129 containerd[1574]: time="2025-07-10T00:25:29.959118790Z" level=info msg="Connect containerd service" Jul 10 00:25:29.959230 containerd[1574]: time="2025-07-10T00:25:29.959210833Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:25:29.960615 containerd[1574]: time="2025-07-10T00:25:29.960590570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:25:30.017160 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:25:30.038326 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:25:30.042241 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:25:30.062507 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:25:30.062918 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:25:30.068400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:25:30.119593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:25:30.124496 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:25:30.128144 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jul 10 00:25:30.129807 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:25:30.188299 containerd[1574]: time="2025-07-10T00:25:30.188235225Z" level=info msg="Start subscribing containerd event" Jul 10 00:25:30.188502 containerd[1574]: time="2025-07-10T00:25:30.188314083Z" level=info msg="Start recovering state" Jul 10 00:25:30.188654 containerd[1574]: time="2025-07-10T00:25:30.188618354Z" level=info msg="Start event monitor" Jul 10 00:25:30.188654 containerd[1574]: time="2025-07-10T00:25:30.188628733Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188669449Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188681402Z" level=info msg="Start streaming server" Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188702742Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188719824Z" level=info msg="runtime interface starting up..." Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188731716Z" level=info msg="starting plugins..." Jul 10 00:25:30.188785 containerd[1574]: time="2025-07-10T00:25:30.188762354Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:25:30.188992 containerd[1574]: time="2025-07-10T00:25:30.188800746Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:25:30.189230 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:25:30.189726 containerd[1574]: time="2025-07-10T00:25:30.189681127Z" level=info msg="containerd successfully booted in 0.262860s" Jul 10 00:25:30.741775 systemd-networkd[1485]: eth0: Gained IPv6LL Jul 10 00:25:30.746679 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:25:30.748645 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:25:30.751613 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:25:30.754579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:30.757342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:25:30.797901 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:25:30.800524 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:25:30.800894 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:25:30.803809 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:25:31.030507 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:25:31.033562 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:45594.service - OpenSSH per-connection server daemon (10.0.0.1:45594). Jul 10 00:25:31.162698 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 45594 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:31.165620 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:31.182771 systemd-logind[1538]: New session 1 of user core. Jul 10 00:25:31.184681 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:25:31.271826 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 10 00:25:31.326764 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:25:31.467613 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:25:31.494459 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:25:31.498494 systemd-logind[1538]: New session c1 of user core. Jul 10 00:25:31.688560 systemd[1677]: Queued start job for default target default.target. Jul 10 00:25:31.728505 systemd[1677]: Created slice app.slice - User Application Slice. Jul 10 00:25:31.728548 systemd[1677]: Reached target paths.target - Paths. Jul 10 00:25:31.728610 systemd[1677]: Reached target timers.target - Timers. Jul 10 00:25:31.730757 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:25:31.749840 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:25:31.750024 systemd[1677]: Reached target sockets.target - Sockets. Jul 10 00:25:31.750079 systemd[1677]: Reached target basic.target - Basic System. Jul 10 00:25:31.750132 systemd[1677]: Reached target default.target - Main User Target. Jul 10 00:25:31.750179 systemd[1677]: Startup finished in 243ms. Jul 10 00:25:31.751199 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:25:31.768665 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:25:31.843120 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:45602.service - OpenSSH per-connection server daemon (10.0.0.1:45602). Jul 10 00:25:31.910244 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 45602 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:31.912050 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:31.917949 systemd-logind[1538]: New session 2 of user core. Jul 10 00:25:31.927851 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:25:32.036771 sshd[1690]: Connection closed by 10.0.0.1 port 45602 Jul 10 00:25:32.039778 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:32.052653 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:45602.service: Deactivated successfully. Jul 10 00:25:32.054769 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:25:32.055670 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:25:32.059072 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:45612.service - OpenSSH per-connection server daemon (10.0.0.1:45612). Jul 10 00:25:32.061212 systemd-logind[1538]: Removed session 2. Jul 10 00:25:32.147012 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 45612 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:32.148756 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:32.153807 systemd-logind[1538]: New session 3 of user core. Jul 10 00:25:32.163582 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:25:32.224339 sshd[1698]: Connection closed by 10.0.0.1 port 45612 Jul 10 00:25:32.224658 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:32.227863 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:45612.service: Deactivated successfully. Jul 10 00:25:32.256848 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:25:32.259845 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. 
Jul 10 00:25:32.261645 systemd-logind[1538]: Removed session 3. Jul 10 00:25:32.928121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:32.930111 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:25:32.931349 systemd[1]: Startup finished in 3.597s (kernel) + 8.675s (initrd) + 7.692s (userspace) = 19.965s. Jul 10 00:25:32.943291 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:33.756057 kubelet[1708]: E0710 00:25:33.755989 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:33.760959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:33.761212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:33.761856 systemd[1]: kubelet.service: Consumed 2.743s CPU time, 266.1M memory peak. Jul 10 00:25:42.251724 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Jul 10 00:25:42.320311 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:42.322326 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:42.329041 systemd-logind[1538]: New session 4 of user core. Jul 10 00:25:42.336791 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:25:42.395399 sshd[1723]: Connection closed by 10.0.0.1 port 34524 Jul 10 00:25:42.396082 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:42.405952 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:34524.service: Deactivated successfully. Jul 10 00:25:42.408355 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:25:42.409274 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:25:42.412610 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). Jul 10 00:25:42.413368 systemd-logind[1538]: Removed session 4. Jul 10 00:25:42.475145 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:42.477063 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:42.482154 systemd-logind[1538]: New session 5 of user core. Jul 10 00:25:42.491633 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:25:42.543830 sshd[1731]: Connection closed by 10.0.0.1 port 34538 Jul 10 00:25:42.544017 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:42.553242 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:34538.service: Deactivated successfully. Jul 10 00:25:42.555247 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:25:42.556082 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:25:42.559143 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:34544.service - OpenSSH per-connection server daemon (10.0.0.1:34544). Jul 10 00:25:42.559992 systemd-logind[1538]: Removed session 5. 
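The kubelet exit above is the normal pre-bootstrap state on Flatcar: the unit starts, finds no /var/lib/kubelet/config.yaml (that file is only written once the node is joined to a cluster, typically by kubeadm), and fails, so systemd keeps rescheduling it; the restart counters visible later in this log are the same loop. For orientation only, a minimal KubeletConfiguration of the kind that eventually lands at that path might look like the sketch below; the cgroupDriver value mirrors the SystemdCgroup=true setting in the containerd configuration earlier in this log, while the DNS address and domain are assumed placeholders:

    import pathlib, textwrap

    # Hypothetical minimal KubeletConfiguration; kubeadm normally generates this file.
    KUBELET_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd        # matches SystemdCgroup=true in the containerd config
        clusterDNS:
          - 10.96.0.10               # assumed cluster DNS service address
        clusterDomain: cluster.local
        """)

    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(KUBELET_CONFIG)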
Jul 10 00:25:42.620863 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 34544 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:42.622980 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:42.629646 systemd-logind[1538]: New session 6 of user core. Jul 10 00:25:42.640810 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:25:42.702177 sshd[1740]: Connection closed by 10.0.0.1 port 34544 Jul 10 00:25:42.703082 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:42.717868 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:34544.service: Deactivated successfully. Jul 10 00:25:42.721863 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:25:42.723358 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:25:42.729312 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:34550.service - OpenSSH per-connection server daemon (10.0.0.1:34550). Jul 10 00:25:42.730196 systemd-logind[1538]: Removed session 6. Jul 10 00:25:42.781979 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 34550 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:42.784012 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:42.789988 systemd-logind[1538]: New session 7 of user core. Jul 10 00:25:42.805812 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:25:42.866538 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:25:42.866885 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:42.883225 sudo[1749]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:42.885265 sshd[1748]: Connection closed by 10.0.0.1 port 34550 Jul 10 00:25:42.885567 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:42.899790 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:34550.service: Deactivated successfully. Jul 10 00:25:42.902028 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:25:42.902885 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:25:42.906272 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:34562.service - OpenSSH per-connection server daemon (10.0.0.1:34562). Jul 10 00:25:42.907497 systemd-logind[1538]: Removed session 7. Jul 10 00:25:42.964306 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 34562 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:42.965967 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:42.970881 systemd-logind[1538]: New session 8 of user core. Jul 10 00:25:42.981603 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 00:25:43.038941 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:25:43.039317 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:43.133382 sudo[1759]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:43.142693 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:25:43.143141 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:43.156289 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:25:43.220379 augenrules[1781]: No rules Jul 10 00:25:43.222389 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:25:43.222700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:25:43.224008 sudo[1758]: pam_unix(sudo:session): session closed for user root Jul 10 00:25:43.225738 sshd[1757]: Connection closed by 10.0.0.1 port 34562 Jul 10 00:25:43.226068 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jul 10 00:25:43.237642 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:34562.service: Deactivated successfully. Jul 10 00:25:43.240840 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:25:43.242319 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:25:43.246843 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:34578.service - OpenSSH per-connection server daemon (10.0.0.1:34578). Jul 10 00:25:43.247994 systemd-logind[1538]: Removed session 8. Jul 10 00:25:43.312945 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 34578 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:25:43.316269 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:25:43.323900 systemd-logind[1538]: New session 9 of user core. Jul 10 00:25:43.333800 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:25:43.396660 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:25:43.397096 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:25:44.011722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:25:44.013982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:44.287873 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:25:44.310480 (dockerd)[1816]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:25:44.369763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:25:44.375949 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:44.502592 kubelet[1820]: E0710 00:25:44.502507 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:44.510910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:44.511358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:44.511906 systemd[1]: kubelet.service: Consumed 453ms CPU time, 110.2M memory peak. Jul 10 00:25:44.816542 dockerd[1816]: time="2025-07-10T00:25:44.816460359Z" level=info msg="Starting up" Jul 10 00:25:44.818300 dockerd[1816]: time="2025-07-10T00:25:44.818275062Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:25:46.096049 dockerd[1816]: time="2025-07-10T00:25:46.095926454Z" level=info msg="Loading containers: start." Jul 10 00:25:46.109525 kernel: Initializing XFRM netlink socket Jul 10 00:25:46.453363 systemd-networkd[1485]: docker0: Link UP Jul 10 00:25:46.460380 dockerd[1816]: time="2025-07-10T00:25:46.460269818Z" level=info msg="Loading containers: done." Jul 10 00:25:46.558738 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1919758136-merged.mount: Deactivated successfully. Jul 10 00:25:46.563478 dockerd[1816]: time="2025-07-10T00:25:46.563407588Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:25:46.563585 dockerd[1816]: time="2025-07-10T00:25:46.563550506Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:25:46.563717 dockerd[1816]: time="2025-07-10T00:25:46.563686822Z" level=info msg="Initializing buildkit" Jul 10 00:25:46.618825 dockerd[1816]: time="2025-07-10T00:25:46.618715711Z" level=info msg="Completed buildkit initialization" Jul 10 00:25:46.629926 dockerd[1816]: time="2025-07-10T00:25:46.629814315Z" level=info msg="Daemon has completed initialization" Jul 10 00:25:46.630206 dockerd[1816]: time="2025-07-10T00:25:46.630012777Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:25:46.630543 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:25:48.647870 containerd[1574]: time="2025-07-10T00:25:48.647794267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 10 00:25:49.341811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559639063.mount: Deactivated successfully. 
Jul 10 00:25:51.014634 containerd[1574]: time="2025-07-10T00:25:51.014552774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:51.053609 containerd[1574]: time="2025-07-10T00:25:51.053505505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 10 00:25:51.056956 containerd[1574]: time="2025-07-10T00:25:51.056873190Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:51.059611 containerd[1574]: time="2025-07-10T00:25:51.059538489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:51.060601 containerd[1574]: time="2025-07-10T00:25:51.060562259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.412690516s" Jul 10 00:25:51.060662 containerd[1574]: time="2025-07-10T00:25:51.060606251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 10 00:25:51.061747 containerd[1574]: time="2025-07-10T00:25:51.061715111Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 10 00:25:54.065837 containerd[1574]: time="2025-07-10T00:25:54.065726965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:54.066824 containerd[1574]: time="2025-07-10T00:25:54.066783967Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 10 00:25:54.068086 containerd[1574]: time="2025-07-10T00:25:54.068038390Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:54.070405 containerd[1574]: time="2025-07-10T00:25:54.070341329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:54.071192 containerd[1574]: time="2025-07-10T00:25:54.071160445Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 3.009416851s" Jul 10 00:25:54.071234 containerd[1574]: time="2025-07-10T00:25:54.071203806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 10 00:25:54.071849 
containerd[1574]: time="2025-07-10T00:25:54.071807729Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 10 00:25:54.632285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:25:54.634648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:25:54.901081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:25:54.925389 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:25:55.185707 kubelet[2108]: E0710 00:25:55.185527 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:25:55.189879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:25:55.190072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:25:55.190530 systemd[1]: kubelet.service: Consumed 298ms CPU time, 110.7M memory peak. Jul 10 00:25:55.642116 containerd[1574]: time="2025-07-10T00:25:55.642053991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:55.642960 containerd[1574]: time="2025-07-10T00:25:55.642935835Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 10 00:25:55.644278 containerd[1574]: time="2025-07-10T00:25:55.644238317Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:55.647295 containerd[1574]: time="2025-07-10T00:25:55.647260615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:55.648274 containerd[1574]: time="2025-07-10T00:25:55.648219443Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.576368303s" Jul 10 00:25:55.648274 containerd[1574]: time="2025-07-10T00:25:55.648271902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 10 00:25:55.649145 containerd[1574]: time="2025-07-10T00:25:55.649110054Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 10 00:25:56.878196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088138292.mount: Deactivated successfully. 
Jul 10 00:25:57.489899 containerd[1574]: time="2025-07-10T00:25:57.489812929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:57.490702 containerd[1574]: time="2025-07-10T00:25:57.490665638Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 10 00:25:57.491995 containerd[1574]: time="2025-07-10T00:25:57.491941891Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:57.494565 containerd[1574]: time="2025-07-10T00:25:57.494475453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:57.496013 containerd[1574]: time="2025-07-10T00:25:57.495941883Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.84678962s" Jul 10 00:25:57.496071 containerd[1574]: time="2025-07-10T00:25:57.496008137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 10 00:25:57.496791 containerd[1574]: time="2025-07-10T00:25:57.496752192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:25:58.236978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1615783723.mount: Deactivated successfully. 
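The pull records above carry both the byte count and the wall-clock time for each image, so effective registry throughput can be read straight off the log; for instance the kube-proxy pull (image size 30894382 bytes in 1.84678962s). A two-line sketch of that arithmetic, using only the values quoted from the pull line above:

    size_bytes, seconds = 30_894_382, 1.84678962        # from the kube-proxy pull above
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")   # ≈ 16.0 MiB/s effective throughput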
Jul 10 00:25:59.813270 containerd[1574]: time="2025-07-10T00:25:59.813180021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:59.814196 containerd[1574]: time="2025-07-10T00:25:59.814060372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 00:25:59.816838 containerd[1574]: time="2025-07-10T00:25:59.816758793Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:59.820837 containerd[1574]: time="2025-07-10T00:25:59.820740259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:25:59.822716 containerd[1574]: time="2025-07-10T00:25:59.822674647Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.325874906s" Jul 10 00:25:59.822716 containerd[1574]: time="2025-07-10T00:25:59.822716005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:25:59.823655 containerd[1574]: time="2025-07-10T00:25:59.823580225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:26:00.469963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657238968.mount: Deactivated successfully. 
Jul 10 00:26:00.475848 containerd[1574]: time="2025-07-10T00:26:00.475793137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:00.476557 containerd[1574]: time="2025-07-10T00:26:00.476528987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:26:00.477615 containerd[1574]: time="2025-07-10T00:26:00.477584086Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:00.479617 containerd[1574]: time="2025-07-10T00:26:00.479586862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:26:00.480478 containerd[1574]: time="2025-07-10T00:26:00.480418291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 656.785207ms" Jul 10 00:26:00.480478 containerd[1574]: time="2025-07-10T00:26:00.480475078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:26:00.481082 containerd[1574]: time="2025-07-10T00:26:00.480916075Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 10 00:26:01.034473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778875292.mount: Deactivated successfully. 
Jul 10 00:26:04.265487 containerd[1574]: time="2025-07-10T00:26:04.265407974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.266278 containerd[1574]: time="2025-07-10T00:26:04.266246687Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 10 00:26:04.267754 containerd[1574]: time="2025-07-10T00:26:04.267712571Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.270915 containerd[1574]: time="2025-07-10T00:26:04.270874819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:04.272039 containerd[1574]: time="2025-07-10T00:26:04.272005261Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.791061204s" Jul 10 00:26:04.272084 containerd[1574]: time="2025-07-10T00:26:04.272040128Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 10 00:26:05.382064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:26:05.384008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:05.608418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:05.628041 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:26:05.671745 kubelet[2269]: E0710 00:26:05.671573 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:26:05.677280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:26:05.677575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:26:05.678078 systemd[1]: kubelet.service: Consumed 228ms CPU time, 110.6M memory peak. Jul 10 00:26:06.189865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:06.190033 systemd[1]: kubelet.service: Consumed 228ms CPU time, 110.6M memory peak. Jul 10 00:26:06.192503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:06.219945 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-9.scope)... Jul 10 00:26:06.219960 systemd[1]: Reloading... Jul 10 00:26:06.312503 zram_generator::config[2330]: No configuration found. Jul 10 00:26:06.961366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:26:07.086593 systemd[1]: Reloading finished in 866 ms. 
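The scheduled kubelet restarts above land at roughly ten-second intervals (counter 1 at 00:25:44.011722, counter 2 at 00:25:54.632285, counter 3 at 00:26:05.382064), which is consistent with a unit-level restart delay of about ten seconds, although the actual Restart=/RestartSec= settings are not shown in this log. A small sketch that recovers that cadence from the timestamps quoted above:

    from datetime import datetime

    # Restart-job timestamps copied from the log lines above (same day, UTC).
    stamps = ["00:25:44.011722", "00:25:54.632285", "00:26:05.382064"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in stamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print(gaps)  # ≈ [10.6, 10.7] seconds between scheduled restart jobs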
Jul 10 00:26:07.163255 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:26:07.163355 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:26:07.163678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:07.163730 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.3M memory peak. Jul 10 00:26:07.165434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:07.336825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:07.354886 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:26:07.397461 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:07.397461 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:26:07.397461 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:07.397962 kubelet[2375]: I0710 00:26:07.397520 2375 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:26:07.637066 kubelet[2375]: I0710 00:26:07.637000 2375 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:26:07.637066 kubelet[2375]: I0710 00:26:07.637035 2375 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:26:07.637460 kubelet[2375]: I0710 00:26:07.637421 2375 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:26:08.418502 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1107286082 wd_nsec: 1107285546 Jul 10 00:26:08.522856 kubelet[2375]: E0710 00:26:08.522772 2375 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:08.524058 kubelet[2375]: I0710 00:26:08.524019 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:26:08.534522 kubelet[2375]: I0710 00:26:08.534485 2375 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:26:08.541141 kubelet[2375]: I0710 00:26:08.541100 2375 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:26:08.542953 kubelet[2375]: I0710 00:26:08.542892 2375 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:26:08.543148 kubelet[2375]: I0710 00:26:08.542938 2375 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:26:08.543265 kubelet[2375]: I0710 00:26:08.543159 2375 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:26:08.543265 kubelet[2375]: I0710 00:26:08.543170 2375 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:26:08.543430 kubelet[2375]: I0710 00:26:08.543403 2375 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:09.478815 kubelet[2375]: I0710 00:26:09.478718 2375 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:26:09.478815 kubelet[2375]: I0710 00:26:09.478818 2375 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:26:09.482641 kubelet[2375]: I0710 00:26:09.478862 2375 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:26:09.482641 kubelet[2375]: I0710 00:26:09.478887 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:26:10.393977 kubelet[2375]: I0710 00:26:10.393662 2375 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:26:10.394565 kubelet[2375]: W0710 00:26:10.394165 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 10 00:26:10.394565 kubelet[2375]: E0710 00:26:10.394245 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.394565 kubelet[2375]: I0710 00:26:10.394331 2375 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:26:10.394565 kubelet[2375]: W0710 00:26:10.394429 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:26:10.396409 kubelet[2375]: W0710 00:26:10.396361 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 10 00:26:10.396409 kubelet[2375]: E0710 00:26:10.396402 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.397557 kubelet[2375]: I0710 00:26:10.397525 2375 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:26:10.397617 kubelet[2375]: I0710 00:26:10.397576 2375 server.go:1287] "Started kubelet" Jul 10 00:26:10.398420 kubelet[2375]: I0710 00:26:10.397746 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:26:10.398420 kubelet[2375]: I0710 00:26:10.398389 2375 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:26:10.398621 kubelet[2375]: I0710 00:26:10.398597 2375 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:26:10.399795 kubelet[2375]: I0710 00:26:10.399517 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:26:10.399795 kubelet[2375]: I0710 00:26:10.399697 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:26:10.400021 kubelet[2375]: I0710 00:26:10.399982 2375 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:26:10.402554 kubelet[2375]: E0710 00:26:10.402518 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:10.402673 kubelet[2375]: I0710 00:26:10.402660 2375 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:26:10.402929 kubelet[2375]: I0710 00:26:10.402914 2375 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:26:10.402987 kubelet[2375]: E0710 00:26:10.402904 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Jul 10 00:26:10.403046 kubelet[2375]: I0710 00:26:10.402989 2375 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:26:10.403298 kubelet[2375]: W0710 00:26:10.403277 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.116:6443: connect: connection refused Jul 10 00:26:10.403331 kubelet[2375]: E0710 00:26:10.403311 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.404028 kubelet[2375]: I0710 00:26:10.403993 2375 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:26:10.404092 kubelet[2375]: I0710 00:26:10.404076 2375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:26:10.405345 kubelet[2375]: E0710 00:26:10.403600 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bc303bf5e603 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:26:10.397545987 +0000 UTC m=+3.038757145,LastTimestamp:2025-07-10 00:26:10.397545987 +0000 UTC m=+3.038757145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:26:10.405480 kubelet[2375]: E0710 00:26:10.405372 2375 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:26:10.464548 kubelet[2375]: I0710 00:26:10.463494 2375 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:26:10.466036 kubelet[2375]: I0710 00:26:10.465986 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:26:10.467712 kubelet[2375]: I0710 00:26:10.467689 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:26:10.468380 kubelet[2375]: I0710 00:26:10.468153 2375 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:26:10.468380 kubelet[2375]: I0710 00:26:10.468195 2375 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:26:10.468380 kubelet[2375]: I0710 00:26:10.468203 2375 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:26:10.468380 kubelet[2375]: E0710 00:26:10.468271 2375 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:26:10.474461 kubelet[2375]: W0710 00:26:10.474235 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 10 00:26:10.475141 kubelet[2375]: E0710 00:26:10.475105 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.484846 kubelet[2375]: I0710 00:26:10.484811 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:26:10.485346 kubelet[2375]: I0710 00:26:10.485071 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:26:10.485346 kubelet[2375]: I0710 00:26:10.485103 2375 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:10.503149 kubelet[2375]: E0710 00:26:10.503087 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:10.568940 kubelet[2375]: E0710 00:26:10.568838 2375 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:26:10.603240 kubelet[2375]: E0710 00:26:10.603175 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:10.603780 kubelet[2375]: E0710 00:26:10.603728 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Jul 10 00:26:10.628676 kubelet[2375]: I0710 00:26:10.628603 2375 policy_none.go:49] "None policy: Start" Jul 10 00:26:10.628676 kubelet[2375]: I0710 00:26:10.628678 2375 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:26:10.628804 kubelet[2375]: I0710 00:26:10.628700 2375 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:26:10.638123 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:26:10.658463 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:26:10.662876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:26:10.674942 kubelet[2375]: I0710 00:26:10.674699 2375 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:26:10.675085 kubelet[2375]: I0710 00:26:10.675058 2375 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:26:10.675113 kubelet[2375]: I0710 00:26:10.675075 2375 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:26:10.675416 kubelet[2375]: I0710 00:26:10.675368 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:26:10.676683 kubelet[2375]: E0710 00:26:10.676627 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:26:10.676769 kubelet[2375]: E0710 00:26:10.676694 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:26:10.708662 kubelet[2375]: E0710 00:26:10.708607 2375 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:10.776076 kubelet[2375]: I0710 00:26:10.776012 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:26:10.776519 kubelet[2375]: E0710 00:26:10.776484 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 10 00:26:10.780247 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 10 00:26:10.801690 kubelet[2375]: E0710 00:26:10.801649 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:10.804224 kubelet[2375]: I0710 00:26:10.804191 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:10.805692 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 10 00:26:10.807933 kubelet[2375]: E0710 00:26:10.807897 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:10.809769 systemd[1]: Created slice kubepods-burstable-pod5be3378b4b642882fd9684da98d499c6.slice - libcontainer container kubepods-burstable-pod5be3378b4b642882fd9684da98d499c6.slice. 
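The kubepods-burstable-pod*.slice units above and the operationExecutor.VerifyControllerAttachedVolume entries around them correspond to the three control-plane static pods the kubelet reads from /etc/kubernetes/manifests; the repeated "dial tcp 10.0.0.116:6443: connect: connection refused" errors from the reflectors, the lease controller and the certificate signing requests will keep appearing until the kube-apiserver static pod started here is actually serving on that address. The manifests themselves are not reproduced in the log; a heavily trimmed sketch of their general shape follows, where the image tag, command and mount paths are assumptions for illustration and only the hostPath volume names (ca-certs, k8s-certs, kubeconfig, ...) are taken from the reconciler entries:

  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-apiserver
    namespace: kube-system
  spec:
    hostNetwork: true
    priorityClassName: system-node-critical          # see the "no PriorityClass ... system-node-critical" errors further down
    containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.32.4   # assumed to match the v1.32.4 kubelet reported above
      command:
      - kube-apiserver
      - --advertise-address=10.0.0.116                # assumed from the API server address used throughout the log
      volumeMounts:
      - name: k8s-certs
        mountPath: /etc/kubernetes/pki
        readOnly: true
    volumes:
    - name: k8s-certs
      hostPath:
        path: /etc/kubernetes/pki
        type: DirectoryOrCreate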
Jul 10 00:26:10.811935 kubelet[2375]: E0710 00:26:10.811906 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:10.904711 kubelet[2375]: I0710 00:26:10.904660 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:10.904711 kubelet[2375]: I0710 00:26:10.904714 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:10.904916 kubelet[2375]: I0710 00:26:10.904758 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:10.904916 kubelet[2375]: I0710 00:26:10.904789 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:10.904916 kubelet[2375]: I0710 00:26:10.904810 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:10.904916 kubelet[2375]: I0710 00:26:10.904832 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:10.904916 kubelet[2375]: I0710 00:26:10.904891 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:10.905038 kubelet[2375]: I0710 00:26:10.904914 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:10.978110 kubelet[2375]: I0710 00:26:10.978005 2375 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Jul 10 00:26:10.978505 kubelet[2375]: E0710 00:26:10.978431 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 10 00:26:11.005299 kubelet[2375]: E0710 00:26:11.005244 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Jul 10 00:26:11.103022 kubelet[2375]: E0710 00:26:11.102946 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.103812 containerd[1574]: time="2025-07-10T00:26:11.103745145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:11.109259 kubelet[2375]: E0710 00:26:11.109234 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.109808 containerd[1574]: time="2025-07-10T00:26:11.109764647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:11.113100 kubelet[2375]: E0710 00:26:11.113049 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.113383 containerd[1574]: time="2025-07-10T00:26:11.113354024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5be3378b4b642882fd9684da98d499c6,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:11.148463 containerd[1574]: time="2025-07-10T00:26:11.147515991Z" level=info msg="connecting to shim 3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6" address="unix:///run/containerd/s/547f260f253d11bcc02ec47b6fc4145a59cb8c88f77af89ee2268756d53d36d3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:11.276711 containerd[1574]: time="2025-07-10T00:26:11.276420862Z" level=info msg="connecting to shim 5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a" address="unix:///run/containerd/s/138dcd7acbd0f80b21c05956d61b9029ce23c5f753312a344745900efb51b50c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:11.285647 systemd[1]: Started cri-containerd-3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6.scope - libcontainer container 3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6. Jul 10 00:26:11.290172 containerd[1574]: time="2025-07-10T00:26:11.288314820Z" level=info msg="connecting to shim 95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c" address="unix:///run/containerd/s/f2bdd404c1d639d60ad8c6fd6ce22b4dad3b24d337ae92d950465d69caa7c2eb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:11.356640 systemd[1]: Started cri-containerd-5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a.scope - libcontainer container 5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a. 
Jul 10 00:26:11.381646 kubelet[2375]: I0710 00:26:11.381585 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:26:11.382239 kubelet[2375]: E0710 00:26:11.382185 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jul 10 00:26:11.435730 systemd[1]: Started cri-containerd-95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c.scope - libcontainer container 95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c. Jul 10 00:26:11.510934 containerd[1574]: time="2025-07-10T00:26:11.510883337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a\"" Jul 10 00:26:11.513046 kubelet[2375]: E0710 00:26:11.513019 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.515640 containerd[1574]: time="2025-07-10T00:26:11.515595676Z" level=info msg="CreateContainer within sandbox \"5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:26:11.518825 containerd[1574]: time="2025-07-10T00:26:11.518786946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6\"" Jul 10 00:26:11.519795 kubelet[2375]: E0710 00:26:11.519769 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.523623 containerd[1574]: time="2025-07-10T00:26:11.523570270Z" level=info msg="CreateContainer within sandbox \"3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:26:11.524458 kubelet[2375]: W0710 00:26:11.524372 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 10 00:26:11.524503 kubelet[2375]: E0710 00:26:11.524479 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:11.532656 containerd[1574]: time="2025-07-10T00:26:11.532535616Z" level=info msg="Container 86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:11.534505 containerd[1574]: time="2025-07-10T00:26:11.534473416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5be3378b4b642882fd9684da98d499c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c\"" Jul 10 00:26:11.535304 
kubelet[2375]: E0710 00:26:11.535281 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:11.537474 containerd[1574]: time="2025-07-10T00:26:11.537420072Z" level=info msg="CreateContainer within sandbox \"95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:26:11.539413 containerd[1574]: time="2025-07-10T00:26:11.539354225Z" level=info msg="Container 73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:11.557822 containerd[1574]: time="2025-07-10T00:26:11.557770550Z" level=info msg="CreateContainer within sandbox \"5ee00e8232e29c09a5469422c33f8e4c3f89ce24270638ba4da05c0849c9f63a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9\"" Jul 10 00:26:11.558575 containerd[1574]: time="2025-07-10T00:26:11.558267093Z" level=info msg="Container d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:11.559151 containerd[1574]: time="2025-07-10T00:26:11.559110925Z" level=info msg="StartContainer for \"86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9\"" Jul 10 00:26:11.561173 containerd[1574]: time="2025-07-10T00:26:11.561109350Z" level=info msg="connecting to shim 86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9" address="unix:///run/containerd/s/138dcd7acbd0f80b21c05956d61b9029ce23c5f753312a344745900efb51b50c" protocol=ttrpc version=3 Jul 10 00:26:11.564105 containerd[1574]: time="2025-07-10T00:26:11.564049053Z" level=info msg="CreateContainer within sandbox \"3e04c5f91b06fa536678ca95061a6dba4c6168b83039032a7b74a30256cce4c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb\"" Jul 10 00:26:11.564568 containerd[1574]: time="2025-07-10T00:26:11.564529997Z" level=info msg="StartContainer for \"73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb\"" Jul 10 00:26:11.565782 containerd[1574]: time="2025-07-10T00:26:11.565751456Z" level=info msg="connecting to shim 73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb" address="unix:///run/containerd/s/547f260f253d11bcc02ec47b6fc4145a59cb8c88f77af89ee2268756d53d36d3" protocol=ttrpc version=3 Jul 10 00:26:11.570327 containerd[1574]: time="2025-07-10T00:26:11.570279775Z" level=info msg="CreateContainer within sandbox \"95dec3a2bf7831233bc85163b3d4c1028d2fe641fc72f9214cb58781e4c3731c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d\"" Jul 10 00:26:11.570820 containerd[1574]: time="2025-07-10T00:26:11.570795845Z" level=info msg="StartContainer for \"d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d\"" Jul 10 00:26:11.571801 containerd[1574]: time="2025-07-10T00:26:11.571777951Z" level=info msg="connecting to shim d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d" address="unix:///run/containerd/s/f2bdd404c1d639d60ad8c6fd6ce22b4dad3b24d337ae92d950465d69caa7c2eb" protocol=ttrpc version=3 Jul 10 00:26:11.679394 systemd[1]: Started cri-containerd-73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb.scope - libcontainer 
container 73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb. Jul 10 00:26:11.681725 systemd[1]: Started cri-containerd-d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d.scope - libcontainer container d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d. Jul 10 00:26:11.686531 systemd[1]: Started cri-containerd-86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9.scope - libcontainer container 86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9. Jul 10 00:26:11.749151 kubelet[2375]: W0710 00:26:11.749086 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jul 10 00:26:11.749367 kubelet[2375]: E0710 00:26:11.749343 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:26:11.807480 kubelet[2375]: E0710 00:26:11.806806 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Jul 10 00:26:12.110853 containerd[1574]: time="2025-07-10T00:26:12.110556455Z" level=info msg="StartContainer for \"73736dd902de7990a7d5ffe46d7bc2a6a10f8caa93eb7218f7918efdd9968beb\" returns successfully" Jul 10 00:26:12.113699 containerd[1574]: time="2025-07-10T00:26:12.113662479Z" level=info msg="StartContainer for \"d04cdf571c2ebfabb3d0f44f1f1965ac81e4e2dc0fec86f15c205ca3c861158d\" returns successfully" Jul 10 00:26:12.117551 containerd[1574]: time="2025-07-10T00:26:12.117502626Z" level=info msg="StartContainer for \"86cd03597ce56f3db83392538c20bce7228f7fd1a2d04b8f74f96b8d079f4bc9\" returns successfully" Jul 10 00:26:12.184173 kubelet[2375]: I0710 00:26:12.184111 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:26:12.522295 kubelet[2375]: E0710 00:26:12.522251 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:12.524568 kubelet[2375]: E0710 00:26:12.522400 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:12.530876 kubelet[2375]: E0710 00:26:12.530841 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:12.530987 kubelet[2375]: E0710 00:26:12.530962 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:12.538540 kubelet[2375]: E0710 00:26:12.538504 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:12.538655 kubelet[2375]: E0710 00:26:12.538612 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:13.414981 kubelet[2375]: E0710 00:26:13.414922 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:26:13.485584 kubelet[2375]: I0710 00:26:13.485421 2375 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:26:13.485584 kubelet[2375]: E0710 00:26:13.485558 2375 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 00:26:13.507010 kubelet[2375]: E0710 00:26:13.506968 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:13.531262 kubelet[2375]: E0710 00:26:13.531226 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:13.531743 kubelet[2375]: E0710 00:26:13.531308 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:26:13.531743 kubelet[2375]: E0710 00:26:13.531364 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:13.531743 kubelet[2375]: E0710 00:26:13.531405 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:13.607533 kubelet[2375]: E0710 00:26:13.607472 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:13.708368 kubelet[2375]: E0710 00:26:13.708170 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:13.808884 kubelet[2375]: E0710 00:26:13.808807 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:13.909950 kubelet[2375]: E0710 00:26:13.909874 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:13.967591 update_engine[1540]: I20250710 00:26:13.967332 1540 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:26:14.011467 kubelet[2375]: E0710 00:26:14.010741 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:14.102890 kubelet[2375]: I0710 00:26:14.102855 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:14.115200 kubelet[2375]: E0710 00:26:14.115164 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:14.115200 kubelet[2375]: I0710 00:26:14.115195 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:14.119527 kubelet[2375]: E0710 00:26:14.119423 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:14.119527 kubelet[2375]: I0710 00:26:14.119471 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:14.121339 kubelet[2375]: E0710 00:26:14.121315 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:14.396371 kubelet[2375]: I0710 00:26:14.396323 2375 apiserver.go:52] "Watching apiserver" Jul 10 00:26:14.403659 kubelet[2375]: I0710 00:26:14.403609 2375 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:26:14.531945 kubelet[2375]: I0710 00:26:14.531872 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:14.535915 kubelet[2375]: E0710 00:26:14.535878 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:15.286471 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-9.scope)... Jul 10 00:26:15.286506 systemd[1]: Reloading... Jul 10 00:26:15.388487 zram_generator::config[2717]: No configuration found. Jul 10 00:26:15.495571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:26:15.533603 kubelet[2375]: E0710 00:26:15.533554 2375 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:15.634735 systemd[1]: Reloading finished in 347 ms. Jul 10 00:26:15.670418 kubelet[2375]: I0710 00:26:15.670289 2375 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:26:15.670495 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:15.689206 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:26:15.689627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:15.689704 systemd[1]: kubelet.service: Consumed 2.896s CPU time, 134.1M memory peak. 
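The three "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: the static pod manifests request that priority class, and at 00:26:14 the freshly started API server has not yet created the built-in PriorityClass objects, so the first mirror-pod creation attempts are rejected and succeed on a later sync (the "already exists" messages further down). For reference, the built-in object the admission check looks for has roughly this shape; it is created by the API server itself, not by anything shown in this log:

  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: system-node-critical
  value: 2000001000            # highest built-in priority, reserved for node-critical system pods
  globalDefault: false
  description: Used for system critical pods that must not be moved from their current node.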
Jul 10 00:26:15.692162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:26:15.926281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:26:15.944011 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:26:15.989763 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:15.989763 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:26:15.989763 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:26:15.990285 kubelet[2756]: I0710 00:26:15.989825 2756 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:26:15.999076 kubelet[2756]: I0710 00:26:15.999022 2756 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 10 00:26:15.999076 kubelet[2756]: I0710 00:26:15.999053 2756 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:26:15.999339 kubelet[2756]: I0710 00:26:15.999316 2756 server.go:954] "Client rotation is on, will bootstrap in background" Jul 10 00:26:16.000585 kubelet[2756]: I0710 00:26:16.000557 2756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:26:16.002679 kubelet[2756]: I0710 00:26:16.002594 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:26:16.006616 kubelet[2756]: I0710 00:26:16.006584 2756 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:26:16.012459 kubelet[2756]: I0710 00:26:16.011370 2756 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:26:16.012459 kubelet[2756]: I0710 00:26:16.011624 2756 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:26:16.012459 kubelet[2756]: I0710 00:26:16.011661 2756 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:26:16.012459 kubelet[2756]: I0710 00:26:16.011992 2756 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012001 2756 container_manager_linux.go:304] "Creating device plugin manager" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012055 2756 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012209 2756 kubelet.go:446] "Attempting to sync node with API server" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012232 2756 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012256 2756 kubelet.go:352] "Adding apiserver pod source" Jul 10 00:26:16.012723 kubelet[2756]: I0710 00:26:16.012266 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:26:16.013772 kubelet[2756]: I0710 00:26:16.013742 2756 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:26:16.014549 kubelet[2756]: I0710 00:26:16.014513 2756 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:26:16.015702 kubelet[2756]: I0710 00:26:16.015677 2756 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:26:16.015789 kubelet[2756]: I0710 00:26:16.015730 2756 server.go:1287] "Started kubelet" Jul 10 00:26:16.016162 kubelet[2756]: I0710 00:26:16.016129 2756 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:26:16.016306 kubelet[2756]: I0710 00:26:16.016264 2756 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:26:16.019404 kubelet[2756]: I0710 00:26:16.018061 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:26:16.020815 kubelet[2756]: I0710 00:26:16.020045 2756 server.go:479] "Adding debug handlers to kubelet server" Jul 10 00:26:16.023972 kubelet[2756]: I0710 00:26:16.023952 2756 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:26:16.024224 kubelet[2756]: I0710 00:26:16.024208 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:26:16.026219 kubelet[2756]: E0710 00:26:16.025932 2756 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:26:16.027140 kubelet[2756]: I0710 00:26:16.027114 2756 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:26:16.027380 kubelet[2756]: E0710 00:26:16.027313 2756 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:26:16.027475 kubelet[2756]: I0710 00:26:16.027431 2756 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:26:16.027826 kubelet[2756]: I0710 00:26:16.027800 2756 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:26:16.027968 kubelet[2756]: I0710 00:26:16.027951 2756 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:26:16.028143 kubelet[2756]: I0710 00:26:16.028121 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:26:16.030949 kubelet[2756]: I0710 00:26:16.030905 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:26:16.031459 kubelet[2756]: I0710 00:26:16.031079 2756 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:26:16.033058 kubelet[2756]: I0710 00:26:16.032750 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:26:16.033058 kubelet[2756]: I0710 00:26:16.032791 2756 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 10 00:26:16.033058 kubelet[2756]: I0710 00:26:16.032816 2756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 00:26:16.033058 kubelet[2756]: I0710 00:26:16.032827 2756 kubelet.go:2382] "Starting kubelet main sync loop" Jul 10 00:26:16.033058 kubelet[2756]: E0710 00:26:16.032890 2756 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:26:16.080410 kubelet[2756]: I0710 00:26:16.080368 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:26:16.080410 kubelet[2756]: I0710 00:26:16.080391 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:26:16.080410 kubelet[2756]: I0710 00:26:16.080423 2756 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:26:16.080687 kubelet[2756]: I0710 00:26:16.080671 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:26:16.080735 kubelet[2756]: I0710 00:26:16.080686 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:26:16.080735 kubelet[2756]: I0710 00:26:16.080709 2756 policy_none.go:49] "None policy: Start" Jul 10 00:26:16.080735 kubelet[2756]: I0710 00:26:16.080732 2756 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:26:16.080833 kubelet[2756]: I0710 00:26:16.080745 2756 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:26:16.080890 kubelet[2756]: I0710 00:26:16.080870 2756 state_mem.go:75] "Updated machine memory state" Jul 10 00:26:16.086037 kubelet[2756]: I0710 00:26:16.085950 2756 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:26:16.086184 kubelet[2756]: I0710 00:26:16.086168 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:26:16.086496 kubelet[2756]: I0710 00:26:16.086183 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:26:16.086729 kubelet[2756]: I0710 00:26:16.086687 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:26:16.089884 kubelet[2756]: E0710 00:26:16.089789 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 10 00:26:16.133704 kubelet[2756]: I0710 00:26:16.133672 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:16.133901 kubelet[2756]: I0710 00:26:16.133871 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:16.134036 kubelet[2756]: I0710 00:26:16.133994 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.147490 kubelet[2756]: E0710 00:26:16.147420 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:16.192422 kubelet[2756]: I0710 00:26:16.192293 2756 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:26:16.229302 kubelet[2756]: I0710 00:26:16.229196 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:16.229493 kubelet[2756]: I0710 00:26:16.229338 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:16.229493 kubelet[2756]: I0710 00:26:16.229370 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.229493 kubelet[2756]: I0710 00:26:16.229454 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.229493 kubelet[2756]: I0710 00:26:16.229489 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:26:16.229705 kubelet[2756]: I0710 00:26:16.229522 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.229705 kubelet[2756]: I0710 00:26:16.229548 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.229705 kubelet[2756]: I0710 00:26:16.229584 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:26:16.229705 kubelet[2756]: I0710 00:26:16.229617 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5be3378b4b642882fd9684da98d499c6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5be3378b4b642882fd9684da98d499c6\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:16.435017 kubelet[2756]: I0710 00:26:16.434970 2756 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:26:16.435148 kubelet[2756]: I0710 00:26:16.435067 2756 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:26:16.441423 kubelet[2756]: E0710 00:26:16.441379 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:16.447660 kubelet[2756]: E0710 00:26:16.447419 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:16.448230 kubelet[2756]: E0710 00:26:16.447805 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:16.456260 sudo[2792]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:26:16.456620 sudo[2792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:26:16.943427 sudo[2792]: pam_unix(sudo:session): session closed for user root Jul 10 00:26:17.014369 kubelet[2756]: I0710 00:26:17.014316 2756 apiserver.go:52] "Watching apiserver" Jul 10 00:26:17.027885 kubelet[2756]: I0710 00:26:17.027839 2756 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:26:17.059285 kubelet[2756]: I0710 00:26:17.058633 2756 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:17.059285 kubelet[2756]: E0710 00:26:17.058993 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:17.059285 kubelet[2756]: E0710 00:26:17.059225 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:17.222991 kubelet[2756]: E0710 00:26:17.222563 2756 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:26:17.222991 kubelet[2756]: E0710 00:26:17.222770 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:17.378078 kubelet[2756]: I0710 00:26:17.377759 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.3777349709999998 podStartE2EDuration="3.377734971s" podCreationTimestamp="2025-07-10 00:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:17.282394608 +0000 UTC m=+1.333434239" watchObservedRunningTime="2025-07-10 00:26:17.377734971 +0000 UTC m=+1.428774601" Jul 10 00:26:17.390478 kubelet[2756]: I0710 00:26:17.390405 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3903825539999999 podStartE2EDuration="1.390382554s" podCreationTimestamp="2025-07-10 00:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:17.378341478 +0000 UTC m=+1.429381108" watchObservedRunningTime="2025-07-10 00:26:17.390382554 +0000 UTC m=+1.441422184" Jul 10 00:26:17.399506 kubelet[2756]: I0710 00:26:17.399417 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.399397104 podStartE2EDuration="1.399397104s" podCreationTimestamp="2025-07-10 00:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:17.390753255 +0000 UTC m=+1.441792905" watchObservedRunningTime="2025-07-10 00:26:17.399397104 +0000 UTC m=+1.450436744" Jul 10 00:26:18.061208 kubelet[2756]: E0710 00:26:18.061148 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:18.061734 kubelet[2756]: E0710 00:26:18.061344 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:18.533184 sudo[1793]: pam_unix(sudo:session): session closed for user root Jul 10 00:26:18.534915 sshd[1792]: Connection closed by 10.0.0.1 port 34578 Jul 10 00:26:18.535531 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:18.540309 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:34578.service: Deactivated successfully. Jul 10 00:26:18.543070 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:26:18.543328 systemd[1]: session-9.scope: Consumed 5.269s CPU time, 257.6M memory peak. Jul 10 00:26:18.545107 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:26:18.547190 systemd-logind[1538]: Removed session 9. 
Jul 10 00:26:19.063168 kubelet[2756]: E0710 00:26:19.063136 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:19.063805 kubelet[2756]: E0710 00:26:19.063139 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:21.578739 kubelet[2756]: I0710 00:26:21.578680 2756 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:26:21.580090 containerd[1574]: time="2025-07-10T00:26:21.580038184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:26:21.580399 kubelet[2756]: I0710 00:26:21.580364 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:26:22.975325 systemd[1]: Created slice kubepods-burstable-pod420995f2_f48d_445d_982c_a6f4a978e305.slice - libcontainer container kubepods-burstable-pod420995f2_f48d_445d_982c_a6f4a978e305.slice. Jul 10 00:26:22.979519 kubelet[2756]: I0710 00:26:22.979474 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-run\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979519 kubelet[2756]: I0710 00:26:22.979518 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/420995f2-f48d-445d-982c-a6f4a978e305-clustermesh-secrets\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979538 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/420995f2-f48d-445d-982c-a6f4a978e305-cilium-config-path\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979554 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-kernel\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979567 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9mm6\" (UniqueName: \"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-kube-api-access-f9mm6\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979586 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-bpf-maps\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979602 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-cgroup\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.979936 kubelet[2756]: I0710 00:26:22.979617 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-hubble-tls\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979630 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-hostproc\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979663 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cni-path\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979678 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-etc-cni-netd\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979695 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-lib-modules\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979708 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-xtables-lock\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.980081 kubelet[2756]: I0710 00:26:22.979724 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-net\") pod \"cilium-hs8f7\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " pod="kube-system/cilium-hs8f7" Jul 10 00:26:22.996925 systemd[1]: Created slice kubepods-besteffort-pod1667ee67_5258_455d_87c6_804179a91363.slice - libcontainer container kubepods-besteffort-pod1667ee67_5258_455d_87c6_804179a91363.slice. Jul 10 00:26:23.011972 systemd[1]: Created slice kubepods-besteffort-pod1e811eaf_b35d_4bbc_a68a_833b4243f360.slice - libcontainer container kubepods-besteffort-pod1e811eaf_b35d_4bbc_a68a_833b4243f360.slice. 
Jul 10 00:26:23.080475 kubelet[2756]: I0710 00:26:23.080393 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1667ee67-5258-455d-87c6-804179a91363-lib-modules\") pod \"kube-proxy-n2rxl\" (UID: \"1667ee67-5258-455d-87c6-804179a91363\") " pod="kube-system/kube-proxy-n2rxl" Jul 10 00:26:23.080691 kubelet[2756]: I0710 00:26:23.080489 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1667ee67-5258-455d-87c6-804179a91363-kube-proxy\") pod \"kube-proxy-n2rxl\" (UID: \"1667ee67-5258-455d-87c6-804179a91363\") " pod="kube-system/kube-proxy-n2rxl" Jul 10 00:26:23.080691 kubelet[2756]: I0710 00:26:23.080520 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7rb\" (UniqueName: \"kubernetes.io/projected/1e811eaf-b35d-4bbc-a68a-833b4243f360-kube-api-access-jh7rb\") pod \"cilium-operator-6c4d7847fc-wbtn6\" (UID: \"1e811eaf-b35d-4bbc-a68a-833b4243f360\") " pod="kube-system/cilium-operator-6c4d7847fc-wbtn6" Jul 10 00:26:23.080691 kubelet[2756]: I0710 00:26:23.080584 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skxvp\" (UniqueName: \"kubernetes.io/projected/1667ee67-5258-455d-87c6-804179a91363-kube-api-access-skxvp\") pod \"kube-proxy-n2rxl\" (UID: \"1667ee67-5258-455d-87c6-804179a91363\") " pod="kube-system/kube-proxy-n2rxl" Jul 10 00:26:23.080691 kubelet[2756]: I0710 00:26:23.080600 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1667ee67-5258-455d-87c6-804179a91363-xtables-lock\") pod \"kube-proxy-n2rxl\" (UID: \"1667ee67-5258-455d-87c6-804179a91363\") " pod="kube-system/kube-proxy-n2rxl" Jul 10 00:26:23.080691 kubelet[2756]: I0710 00:26:23.080653 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e811eaf-b35d-4bbc-a68a-833b4243f360-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wbtn6\" (UID: \"1e811eaf-b35d-4bbc-a68a-833b4243f360\") " pod="kube-system/cilium-operator-6c4d7847fc-wbtn6" Jul 10 00:26:23.283208 kubelet[2756]: E0710 00:26:23.283037 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.283883 containerd[1574]: time="2025-07-10T00:26:23.283835572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs8f7,Uid:420995f2-f48d-445d-982c-a6f4a978e305,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:23.308956 kubelet[2756]: E0710 00:26:23.308911 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.309547 containerd[1574]: time="2025-07-10T00:26:23.309427286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2rxl,Uid:1667ee67-5258-455d-87c6-804179a91363,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:23.315936 kubelet[2756]: E0710 00:26:23.315912 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 
10 00:26:23.316569 containerd[1574]: time="2025-07-10T00:26:23.316386473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wbtn6,Uid:1e811eaf-b35d-4bbc-a68a-833b4243f360,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:23.321607 containerd[1574]: time="2025-07-10T00:26:23.321546265Z" level=info msg="connecting to shim 1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:23.337201 containerd[1574]: time="2025-07-10T00:26:23.336965059Z" level=info msg="connecting to shim 21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656" address="unix:///run/containerd/s/b1bcad803f885df1d8315d867177194aa90f6c99291d43a2bf74880da8e5018a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:23.352374 containerd[1574]: time="2025-07-10T00:26:23.352314143Z" level=info msg="connecting to shim 3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0" address="unix:///run/containerd/s/5df438f453d17ae056690ef9ae0739547dec31bf78f43901ef1b2b8f4e9bc1ac" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:23.391676 systemd[1]: Started cri-containerd-1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465.scope - libcontainer container 1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465. Jul 10 00:26:23.394058 systemd[1]: Started cri-containerd-21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656.scope - libcontainer container 21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656. Jul 10 00:26:23.399291 systemd[1]: Started cri-containerd-3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0.scope - libcontainer container 3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0. 
Jul 10 00:26:23.434212 containerd[1574]: time="2025-07-10T00:26:23.434150630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hs8f7,Uid:420995f2-f48d-445d-982c-a6f4a978e305,Namespace:kube-system,Attempt:0,} returns sandbox id \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\"" Jul 10 00:26:23.434838 kubelet[2756]: E0710 00:26:23.434781 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.436581 containerd[1574]: time="2025-07-10T00:26:23.436554695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:26:23.447711 containerd[1574]: time="2025-07-10T00:26:23.447567568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n2rxl,Uid:1667ee67-5258-455d-87c6-804179a91363,Namespace:kube-system,Attempt:0,} returns sandbox id \"21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656\"" Jul 10 00:26:23.448903 kubelet[2756]: E0710 00:26:23.448854 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.452671 containerd[1574]: time="2025-07-10T00:26:23.452622254Z" level=info msg="CreateContainer within sandbox \"21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:26:23.465136 containerd[1574]: time="2025-07-10T00:26:23.465084691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wbtn6,Uid:1e811eaf-b35d-4bbc-a68a-833b4243f360,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\"" Jul 10 00:26:23.465867 kubelet[2756]: E0710 00:26:23.465843 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.467032 containerd[1574]: time="2025-07-10T00:26:23.466995475Z" level=info msg="Container 6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:23.476068 containerd[1574]: time="2025-07-10T00:26:23.476023134Z" level=info msg="CreateContainer within sandbox \"21faf81b3db6f0a9f15ad2d63b341af13c29afada7af656ef24db097610a2656\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2\"" Jul 10 00:26:23.476597 containerd[1574]: time="2025-07-10T00:26:23.476525041Z" level=info msg="StartContainer for \"6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2\"" Jul 10 00:26:23.480845 containerd[1574]: time="2025-07-10T00:26:23.480794774Z" level=info msg="connecting to shim 6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2" address="unix:///run/containerd/s/b1bcad803f885df1d8315d867177194aa90f6c99291d43a2bf74880da8e5018a" protocol=ttrpc version=3 Jul 10 00:26:23.496429 kubelet[2756]: E0710 00:26:23.496396 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:23.502575 systemd[1]: Started 
cri-containerd-6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2.scope - libcontainer container 6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2. Jul 10 00:26:23.555214 containerd[1574]: time="2025-07-10T00:26:23.555055242Z" level=info msg="StartContainer for \"6e54e61d53a8e80b6cd755d0bb16e834c3c8ec2dfc2389502207c5adf6cd62e2\" returns successfully" Jul 10 00:26:24.072456 kubelet[2756]: E0710 00:26:24.072400 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:24.073313 kubelet[2756]: E0710 00:26:24.073294 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:24.080210 kubelet[2756]: I0710 00:26:24.080161 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n2rxl" podStartSLOduration=2.080147738 podStartE2EDuration="2.080147738s" podCreationTimestamp="2025-07-10 00:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:24.079957048 +0000 UTC m=+8.130996678" watchObservedRunningTime="2025-07-10 00:26:24.080147738 +0000 UTC m=+8.131187368" Jul 10 00:26:24.124150 kubelet[2756]: E0710 00:26:24.124099 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:25.074584 kubelet[2756]: E0710 00:26:25.074545 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:26.075700 kubelet[2756]: E0710 00:26:26.075647 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:28.845769 kubelet[2756]: E0710 00:26:28.845720 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:29.080307 kubelet[2756]: E0710 00:26:29.080273 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:30.127374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535821640.mount: Deactivated successfully. 
Jul 10 00:26:41.835849 containerd[1574]: time="2025-07-10T00:26:41.835768712Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:41.836717 containerd[1574]: time="2025-07-10T00:26:41.836663323Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:26:41.837786 containerd[1574]: time="2025-07-10T00:26:41.837753802Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:41.839529 containerd[1574]: time="2025-07-10T00:26:41.839468342Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.402871287s" Jul 10 00:26:41.839529 containerd[1574]: time="2025-07-10T00:26:41.839515350Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:26:41.846011 containerd[1574]: time="2025-07-10T00:26:41.845966031Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:26:41.848663 containerd[1574]: time="2025-07-10T00:26:41.848594759Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:26:41.860722 containerd[1574]: time="2025-07-10T00:26:41.860658334Z" level=info msg="Container f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:41.864522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714416193.mount: Deactivated successfully. Jul 10 00:26:41.867835 containerd[1574]: time="2025-07-10T00:26:41.867796856Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\"" Jul 10 00:26:41.868574 containerd[1574]: time="2025-07-10T00:26:41.868427270Z" level=info msg="StartContainer for \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\"" Jul 10 00:26:41.869571 containerd[1574]: time="2025-07-10T00:26:41.869529009Z" level=info msg="connecting to shim f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" protocol=ttrpc version=3 Jul 10 00:26:41.896709 systemd[1]: Started cri-containerd-f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa.scope - libcontainer container f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa. 
Jul 10 00:26:41.934035 containerd[1574]: time="2025-07-10T00:26:41.933995879Z" level=info msg="StartContainer for \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" returns successfully" Jul 10 00:26:41.946868 systemd[1]: cri-containerd-f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa.scope: Deactivated successfully. Jul 10 00:26:41.948532 containerd[1574]: time="2025-07-10T00:26:41.948486413Z" level=info msg="received exit event container_id:\"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" id:\"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" pid:3179 exited_at:{seconds:1752107201 nanos:947910050}" Jul 10 00:26:41.948690 containerd[1574]: time="2025-07-10T00:26:41.948519565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" id:\"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" pid:3179 exited_at:{seconds:1752107201 nanos:947910050}" Jul 10 00:26:41.975146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa-rootfs.mount: Deactivated successfully. Jul 10 00:26:42.109087 kubelet[2756]: E0710 00:26:42.104471 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:43.108417 kubelet[2756]: E0710 00:26:43.108294 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:43.110960 containerd[1574]: time="2025-07-10T00:26:43.110902357Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:26:43.123526 containerd[1574]: time="2025-07-10T00:26:43.123297370Z" level=info msg="Container d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:43.135107 containerd[1574]: time="2025-07-10T00:26:43.135026081Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\"" Jul 10 00:26:43.137749 containerd[1574]: time="2025-07-10T00:26:43.137648597Z" level=info msg="StartContainer for \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\"" Jul 10 00:26:43.139722 containerd[1574]: time="2025-07-10T00:26:43.139679742Z" level=info msg="connecting to shim d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" protocol=ttrpc version=3 Jul 10 00:26:43.163711 systemd[1]: Started cri-containerd-d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106.scope - libcontainer container d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106. Jul 10 00:26:43.202222 containerd[1574]: time="2025-07-10T00:26:43.202179029Z" level=info msg="StartContainer for \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" returns successfully" Jul 10 00:26:43.217237 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 10 00:26:43.217551 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:26:43.217867 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:26:43.220835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:26:43.224421 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:26:43.224760 containerd[1574]: time="2025-07-10T00:26:43.224332140Z" level=info msg="received exit event container_id:\"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" id:\"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" pid:3223 exited_at:{seconds:1752107203 nanos:224036164}" Jul 10 00:26:43.224760 containerd[1574]: time="2025-07-10T00:26:43.224504494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" id:\"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" pid:3223 exited_at:{seconds:1752107203 nanos:224036164}" Jul 10 00:26:43.225133 systemd[1]: cri-containerd-d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106.scope: Deactivated successfully. Jul 10 00:26:43.259688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:26:44.124321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106-rootfs.mount: Deactivated successfully. Jul 10 00:26:44.146509 kubelet[2756]: E0710 00:26:44.146473 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:44.148222 containerd[1574]: time="2025-07-10T00:26:44.148177622Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:26:44.268613 containerd[1574]: time="2025-07-10T00:26:44.268537909Z" level=info msg="Container ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:44.282066 containerd[1574]: time="2025-07-10T00:26:44.282014320Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\"" Jul 10 00:26:44.282622 containerd[1574]: time="2025-07-10T00:26:44.282588008Z" level=info msg="StartContainer for \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\"" Jul 10 00:26:44.286138 containerd[1574]: time="2025-07-10T00:26:44.286079997Z" level=info msg="connecting to shim ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" protocol=ttrpc version=3 Jul 10 00:26:44.295270 containerd[1574]: time="2025-07-10T00:26:44.295209394Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:44.296987 containerd[1574]: time="2025-07-10T00:26:44.296949733Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active 
requests=0, bytes read=18904197" Jul 10 00:26:44.298457 containerd[1574]: time="2025-07-10T00:26:44.298405457Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:26:44.300584 containerd[1574]: time="2025-07-10T00:26:44.300551798Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.454346238s" Jul 10 00:26:44.300633 containerd[1574]: time="2025-07-10T00:26:44.300590150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:26:44.305713 containerd[1574]: time="2025-07-10T00:26:44.305183146Z" level=info msg="CreateContainer within sandbox \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:26:44.317717 systemd[1]: Started cri-containerd-ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093.scope - libcontainer container ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093. Jul 10 00:26:44.318918 containerd[1574]: time="2025-07-10T00:26:44.317738226Z" level=info msg="Container 40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:44.328141 containerd[1574]: time="2025-07-10T00:26:44.328056577Z" level=info msg="CreateContainer within sandbox \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\"" Jul 10 00:26:44.329297 containerd[1574]: time="2025-07-10T00:26:44.329247093Z" level=info msg="StartContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\"" Jul 10 00:26:44.330812 containerd[1574]: time="2025-07-10T00:26:44.330721171Z" level=info msg="connecting to shim 40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8" address="unix:///run/containerd/s/5df438f453d17ae056690ef9ae0739547dec31bf78f43901ef1b2b8f4e9bc1ac" protocol=ttrpc version=3 Jul 10 00:26:44.353367 systemd[1]: Started cri-containerd-40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8.scope - libcontainer container 40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8. Jul 10 00:26:44.370814 systemd[1]: cri-containerd-ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093.scope: Deactivated successfully. 
Jul 10 00:26:44.372242 containerd[1574]: time="2025-07-10T00:26:44.372026079Z" level=info msg="received exit event container_id:\"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" id:\"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" pid:3290 exited_at:{seconds:1752107204 nanos:371623794}" Jul 10 00:26:44.372816 containerd[1574]: time="2025-07-10T00:26:44.372782760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" id:\"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" pid:3290 exited_at:{seconds:1752107204 nanos:371623794}" Jul 10 00:26:44.374420 containerd[1574]: time="2025-07-10T00:26:44.374312072Z" level=info msg="StartContainer for \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" returns successfully" Jul 10 00:26:44.399014 containerd[1574]: time="2025-07-10T00:26:44.398975395Z" level=info msg="StartContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" returns successfully" Jul 10 00:26:45.126912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093-rootfs.mount: Deactivated successfully. Jul 10 00:26:45.152482 kubelet[2756]: E0710 00:26:45.152418 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:45.158746 kubelet[2756]: E0710 00:26:45.158709 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:45.163150 containerd[1574]: time="2025-07-10T00:26:45.163096531Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:26:45.181499 containerd[1574]: time="2025-07-10T00:26:45.179789090Z" level=info msg="Container a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:45.190464 containerd[1574]: time="2025-07-10T00:26:45.190085987Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\"" Jul 10 00:26:45.194459 containerd[1574]: time="2025-07-10T00:26:45.192197784Z" level=info msg="StartContainer for \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\"" Jul 10 00:26:45.195556 containerd[1574]: time="2025-07-10T00:26:45.195503611Z" level=info msg="connecting to shim a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" protocol=ttrpc version=3 Jul 10 00:26:45.224817 kubelet[2756]: I0710 00:26:45.224742 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wbtn6" podStartSLOduration=2.389705394 podStartE2EDuration="23.224722375s" podCreationTimestamp="2025-07-10 00:26:22 +0000 UTC" firstStartedPulling="2025-07-10 00:26:23.466366759 +0000 UTC m=+7.517406389" lastFinishedPulling="2025-07-10 00:26:44.30138374 +0000 UTC m=+28.352423370" observedRunningTime="2025-07-10 
00:26:45.223694955 +0000 UTC m=+29.274734585" watchObservedRunningTime="2025-07-10 00:26:45.224722375 +0000 UTC m=+29.275762005" Jul 10 00:26:45.231159 systemd[1]: Started cri-containerd-a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0.scope - libcontainer container a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0. Jul 10 00:26:45.276372 systemd[1]: cri-containerd-a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0.scope: Deactivated successfully. Jul 10 00:26:45.279602 containerd[1574]: time="2025-07-10T00:26:45.279557174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" id:\"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" pid:3361 exited_at:{seconds:1752107205 nanos:278691058}" Jul 10 00:26:45.282085 containerd[1574]: time="2025-07-10T00:26:45.281791891Z" level=info msg="received exit event container_id:\"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" id:\"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" pid:3361 exited_at:{seconds:1752107205 nanos:278691058}" Jul 10 00:26:45.284240 containerd[1574]: time="2025-07-10T00:26:45.284116256Z" level=info msg="StartContainer for \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" returns successfully" Jul 10 00:26:45.307763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0-rootfs.mount: Deactivated successfully. Jul 10 00:26:46.164930 kubelet[2756]: E0710 00:26:46.164892 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:46.165480 kubelet[2756]: E0710 00:26:46.165057 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:46.167346 containerd[1574]: time="2025-07-10T00:26:46.167296562Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:26:46.187928 containerd[1574]: time="2025-07-10T00:26:46.187430971Z" level=info msg="Container 14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:46.194636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975173704.mount: Deactivated successfully. 
Jul 10 00:26:46.198033 containerd[1574]: time="2025-07-10T00:26:46.197972987Z" level=info msg="CreateContainer within sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\"" Jul 10 00:26:46.198652 containerd[1574]: time="2025-07-10T00:26:46.198626264Z" level=info msg="StartContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\"" Jul 10 00:26:46.201661 containerd[1574]: time="2025-07-10T00:26:46.201618032Z" level=info msg="connecting to shim 14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050" address="unix:///run/containerd/s/7ac0bfc1783c8f1bd1e891f957c6197d030849ee8f0621d96d79758bcf389998" protocol=ttrpc version=3 Jul 10 00:26:46.232606 systemd[1]: Started cri-containerd-14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050.scope - libcontainer container 14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050. Jul 10 00:26:46.276772 containerd[1574]: time="2025-07-10T00:26:46.276729119Z" level=info msg="StartContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" returns successfully" Jul 10 00:26:46.367467 containerd[1574]: time="2025-07-10T00:26:46.367260652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" id:\"bc3eeda7c1b9cbc813e256a194f5218c10796b154fc8181ec74003ef010f0433\" pid:3433 exited_at:{seconds:1752107206 nanos:366115822}" Jul 10 00:26:46.402478 kubelet[2756]: I0710 00:26:46.402053 2756 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:26:46.439070 systemd[1]: Created slice kubepods-burstable-podbed1622b_7786_42ec_9b45_0e5850de1dd1.slice - libcontainer container kubepods-burstable-podbed1622b_7786_42ec_9b45_0e5850de1dd1.slice. Jul 10 00:26:46.448041 systemd[1]: Created slice kubepods-burstable-podb787a5aa_96c5_4875_b80e_c4596f14e80f.slice - libcontainer container kubepods-burstable-podb787a5aa_96c5_4875_b80e_c4596f14e80f.slice. 
Jul 10 00:26:46.500613 kubelet[2756]: I0710 00:26:46.500551 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lplr\" (UniqueName: \"kubernetes.io/projected/b787a5aa-96c5-4875-b80e-c4596f14e80f-kube-api-access-2lplr\") pod \"coredns-668d6bf9bc-22wmm\" (UID: \"b787a5aa-96c5-4875-b80e-c4596f14e80f\") " pod="kube-system/coredns-668d6bf9bc-22wmm" Jul 10 00:26:46.500613 kubelet[2756]: I0710 00:26:46.500604 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b787a5aa-96c5-4875-b80e-c4596f14e80f-config-volume\") pod \"coredns-668d6bf9bc-22wmm\" (UID: \"b787a5aa-96c5-4875-b80e-c4596f14e80f\") " pod="kube-system/coredns-668d6bf9bc-22wmm" Jul 10 00:26:46.500830 kubelet[2756]: I0710 00:26:46.500683 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnm6\" (UniqueName: \"kubernetes.io/projected/bed1622b-7786-42ec-9b45-0e5850de1dd1-kube-api-access-mpnm6\") pod \"coredns-668d6bf9bc-xcg54\" (UID: \"bed1622b-7786-42ec-9b45-0e5850de1dd1\") " pod="kube-system/coredns-668d6bf9bc-xcg54" Jul 10 00:26:46.500830 kubelet[2756]: I0710 00:26:46.500703 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bed1622b-7786-42ec-9b45-0e5850de1dd1-config-volume\") pod \"coredns-668d6bf9bc-xcg54\" (UID: \"bed1622b-7786-42ec-9b45-0e5850de1dd1\") " pod="kube-system/coredns-668d6bf9bc-xcg54" Jul 10 00:26:46.746552 kubelet[2756]: E0710 00:26:46.746259 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:46.747455 containerd[1574]: time="2025-07-10T00:26:46.747360167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xcg54,Uid:bed1622b-7786-42ec-9b45-0e5850de1dd1,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:46.752085 kubelet[2756]: E0710 00:26:46.752010 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:46.752777 containerd[1574]: time="2025-07-10T00:26:46.752723668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22wmm,Uid:b787a5aa-96c5-4875-b80e-c4596f14e80f,Namespace:kube-system,Attempt:0,}" Jul 10 00:26:47.178252 kubelet[2756]: E0710 00:26:47.178210 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:47.253162 kubelet[2756]: I0710 00:26:47.253064 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hs8f7" podStartSLOduration=6.842613409 podStartE2EDuration="25.25303944s" podCreationTimestamp="2025-07-10 00:26:22 +0000 UTC" firstStartedPulling="2025-07-10 00:26:23.435337399 +0000 UTC m=+7.486377029" lastFinishedPulling="2025-07-10 00:26:41.84576343 +0000 UTC m=+25.896803060" observedRunningTime="2025-07-10 00:26:47.252035064 +0000 UTC m=+31.303074724" watchObservedRunningTime="2025-07-10 00:26:47.25303944 +0000 UTC m=+31.304079070" Jul 10 00:26:48.183086 kubelet[2756]: E0710 00:26:48.182562 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:48.974816 systemd-networkd[1485]: cilium_host: Link UP Jul 10 00:26:48.975058 systemd-networkd[1485]: cilium_net: Link UP Jul 10 00:26:48.975346 systemd-networkd[1485]: cilium_net: Gained carrier Jul 10 00:26:48.975604 systemd-networkd[1485]: cilium_host: Gained carrier Jul 10 00:26:48.990585 systemd-networkd[1485]: cilium_host: Gained IPv6LL Jul 10 00:26:49.087845 systemd-networkd[1485]: cilium_vxlan: Link UP Jul 10 00:26:49.087859 systemd-networkd[1485]: cilium_vxlan: Gained carrier Jul 10 00:26:49.185031 kubelet[2756]: E0710 00:26:49.184983 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:49.333913 systemd-networkd[1485]: cilium_net: Gained IPv6LL Jul 10 00:26:49.372484 kernel: NET: Registered PF_ALG protocol family Jul 10 00:26:49.432634 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:51990.service - OpenSSH per-connection server daemon (10.0.0.1:51990). Jul 10 00:26:49.493455 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 51990 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:26:49.495673 sshd-session[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:49.501709 systemd-logind[1538]: New session 10 of user core. Jul 10 00:26:49.511857 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:26:49.662586 sshd[3643]: Connection closed by 10.0.0.1 port 51990 Jul 10 00:26:49.662914 sshd-session[3641]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:49.666486 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:26:49.667194 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:51990.service: Deactivated successfully. Jul 10 00:26:49.670587 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:26:49.674296 systemd-logind[1538]: Removed session 10. 
Jul 10 00:26:50.161323 systemd-networkd[1485]: lxc_health: Link UP Jul 10 00:26:50.176221 systemd-networkd[1485]: lxc_health: Gained carrier Jul 10 00:26:50.294753 systemd-networkd[1485]: cilium_vxlan: Gained IPv6LL Jul 10 00:26:50.585197 systemd-networkd[1485]: lxc30bd8e2801ad: Link UP Jul 10 00:26:50.594484 kernel: eth0: renamed from tmp7b58d Jul 10 00:26:50.596133 systemd-networkd[1485]: lxc30bd8e2801ad: Gained carrier Jul 10 00:26:50.619487 systemd-networkd[1485]: lxc1689cd60923f: Link UP Jul 10 00:26:50.621475 kernel: eth0: renamed from tmpaed45 Jul 10 00:26:50.622789 systemd-networkd[1485]: lxc1689cd60923f: Gained carrier Jul 10 00:26:51.284911 kubelet[2756]: E0710 00:26:51.284845 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:51.957691 systemd-networkd[1485]: lxc30bd8e2801ad: Gained IPv6LL Jul 10 00:26:52.021679 systemd-networkd[1485]: lxc_health: Gained IPv6LL Jul 10 00:26:52.199263 kubelet[2756]: E0710 00:26:52.199210 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:52.533719 systemd-networkd[1485]: lxc1689cd60923f: Gained IPv6LL Jul 10 00:26:53.200702 kubelet[2756]: E0710 00:26:53.200659 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:54.416873 containerd[1574]: time="2025-07-10T00:26:54.416782271Z" level=info msg="connecting to shim 7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071" address="unix:///run/containerd/s/c4062b6e343263e438ce3825918685fdb874c8aa888e31f3c2b955ff426e69f3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:54.417616 containerd[1574]: time="2025-07-10T00:26:54.417545884Z" level=info msg="connecting to shim aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d" address="unix:///run/containerd/s/b367848416fc41d868d176f924fa10e095cbd336dc6a558e991d486efc780ba1" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:26:54.442597 systemd[1]: Started cri-containerd-7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071.scope - libcontainer container 7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071. Jul 10 00:26:54.446605 systemd[1]: Started cri-containerd-aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d.scope - libcontainer container aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d. 
Jul 10 00:26:54.459191 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:26:54.461078 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:26:54.610074 containerd[1574]: time="2025-07-10T00:26:54.609955215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22wmm,Uid:b787a5aa-96c5-4875-b80e-c4596f14e80f,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d\"" Jul 10 00:26:54.610972 kubelet[2756]: E0710 00:26:54.610945 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:54.612922 containerd[1574]: time="2025-07-10T00:26:54.612897256Z" level=info msg="CreateContainer within sandbox \"aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:54.614361 containerd[1574]: time="2025-07-10T00:26:54.614286714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xcg54,Uid:bed1622b-7786-42ec-9b45-0e5850de1dd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071\"" Jul 10 00:26:54.615134 kubelet[2756]: E0710 00:26:54.614970 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:54.616803 containerd[1574]: time="2025-07-10T00:26:54.616771347Z" level=info msg="CreateContainer within sandbox \"7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:26:54.640067 containerd[1574]: time="2025-07-10T00:26:54.639995296Z" level=info msg="Container 289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:54.643813 containerd[1574]: time="2025-07-10T00:26:54.643769058Z" level=info msg="Container 816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:26:54.658384 containerd[1574]: time="2025-07-10T00:26:54.658324336Z" level=info msg="CreateContainer within sandbox \"aed45a71636a6238e257e5f4460238fcc0146cacfe6e724451259d68dd91be0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf\"" Jul 10 00:26:54.663888 containerd[1574]: time="2025-07-10T00:26:54.663776929Z" level=info msg="CreateContainer within sandbox \"7b58d714d1d45dc077bf99e5c48d1e4df604efe86e03f99d662bf62e7ab87071\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7\"" Jul 10 00:26:54.664556 containerd[1574]: time="2025-07-10T00:26:54.664429905Z" level=info msg="StartContainer for \"816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7\"" Jul 10 00:26:54.667060 containerd[1574]: time="2025-07-10T00:26:54.666927643Z" level=info msg="StartContainer for \"289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf\"" Jul 10 00:26:54.667890 containerd[1574]: time="2025-07-10T00:26:54.667863369Z" level=info msg="connecting to shim 
289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf" address="unix:///run/containerd/s/b367848416fc41d868d176f924fa10e095cbd336dc6a558e991d486efc780ba1" protocol=ttrpc version=3 Jul 10 00:26:54.676406 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:53296.service - OpenSSH per-connection server daemon (10.0.0.1:53296). Jul 10 00:26:54.684179 containerd[1574]: time="2025-07-10T00:26:54.684122364Z" level=info msg="connecting to shim 816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7" address="unix:///run/containerd/s/c4062b6e343263e438ce3825918685fdb874c8aa888e31f3c2b955ff426e69f3" protocol=ttrpc version=3 Jul 10 00:26:54.693109 systemd[1]: Started cri-containerd-289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf.scope - libcontainer container 289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf. Jul 10 00:26:54.707578 systemd[1]: Started cri-containerd-816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7.scope - libcontainer container 816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7. Jul 10 00:26:54.743552 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 53296 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:26:54.745430 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:54.748418 containerd[1574]: time="2025-07-10T00:26:54.748297841Z" level=info msg="StartContainer for \"289e11f2c00ae095f50060247d5315cf29376243cfb086261b707c5d9f2b6faf\" returns successfully" Jul 10 00:26:54.751676 systemd-logind[1538]: New session 11 of user core. Jul 10 00:26:54.761706 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:26:54.772717 containerd[1574]: time="2025-07-10T00:26:54.772651920Z" level=info msg="StartContainer for \"816644e1368e9380c378ecaab82dae1df7602f187ff4db16719d1fe8dd24e3a7\" returns successfully" Jul 10 00:26:54.897341 sshd[4081]: Connection closed by 10.0.0.1 port 53296 Jul 10 00:26:54.897782 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:54.902635 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:53296.service: Deactivated successfully. Jul 10 00:26:54.905527 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:26:54.906490 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:26:54.908255 systemd-logind[1538]: Removed session 11. 
Jul 10 00:26:55.206863 kubelet[2756]: E0710 00:26:55.206822 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:55.208600 kubelet[2756]: E0710 00:26:55.208552 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:55.218454 kubelet[2756]: I0710 00:26:55.218023 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-22wmm" podStartSLOduration=33.217980308 podStartE2EDuration="33.217980308s" podCreationTimestamp="2025-07-10 00:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:55.21741652 +0000 UTC m=+39.268456150" watchObservedRunningTime="2025-07-10 00:26:55.217980308 +0000 UTC m=+39.269019938" Jul 10 00:26:55.226473 kubelet[2756]: I0710 00:26:55.226197 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xcg54" podStartSLOduration=33.226177553 podStartE2EDuration="33.226177553s" podCreationTimestamp="2025-07-10 00:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:26:55.22611208 +0000 UTC m=+39.277151720" watchObservedRunningTime="2025-07-10 00:26:55.226177553 +0000 UTC m=+39.277217183" Jul 10 00:26:56.210173 kubelet[2756]: E0710 00:26:56.210117 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:56.210802 kubelet[2756]: E0710 00:26:56.210273 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:57.213457 kubelet[2756]: E0710 00:26:57.213144 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:57.213457 kubelet[2756]: E0710 00:26:57.213353 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:26:59.913779 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:33002.service - OpenSSH per-connection server daemon (10.0.0.1:33002). Jul 10 00:26:59.973792 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:26:59.975485 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:59.980447 systemd-logind[1538]: New session 12 of user core. Jul 10 00:26:59.994635 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:27:00.108680 sshd[4115]: Connection closed by 10.0.0.1 port 33002 Jul 10 00:27:00.108992 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:00.112076 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:33002.service: Deactivated successfully. Jul 10 00:27:00.114082 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:27:00.116407 systemd-logind[1538]: Session 12 logged out. 
Waiting for processes to exit. Jul 10 00:27:00.117451 systemd-logind[1538]: Removed session 12. Jul 10 00:27:05.128633 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:33010.service - OpenSSH per-connection server daemon (10.0.0.1:33010). Jul 10 00:27:05.188015 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 33010 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:05.189567 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:05.195102 systemd-logind[1538]: New session 13 of user core. Jul 10 00:27:05.200572 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:27:05.318993 sshd[4132]: Connection closed by 10.0.0.1 port 33010 Jul 10 00:27:05.319523 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:05.333771 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:33010.service: Deactivated successfully. Jul 10 00:27:05.336479 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:27:05.338702 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:27:05.341891 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). Jul 10 00:27:05.342760 systemd-logind[1538]: Removed session 13. Jul 10 00:27:05.393524 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:05.395231 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:05.400299 systemd-logind[1538]: New session 14 of user core. Jul 10 00:27:05.411616 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:27:05.873030 sshd[4149]: Connection closed by 10.0.0.1 port 33020 Jul 10 00:27:05.874698 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:05.889590 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:33020.service: Deactivated successfully. Jul 10 00:27:05.893491 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:27:05.895601 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:27:05.901473 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026). Jul 10 00:27:05.902562 systemd-logind[1538]: Removed session 14. Jul 10 00:27:05.950836 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:05.952639 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:05.958015 systemd-logind[1538]: New session 15 of user core. Jul 10 00:27:05.968610 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:27:06.079918 sshd[4163]: Connection closed by 10.0.0.1 port 33026 Jul 10 00:27:06.080283 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:06.084723 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:33026.service: Deactivated successfully. Jul 10 00:27:06.087051 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:27:06.087819 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:27:06.089041 systemd-logind[1538]: Removed session 15. Jul 10 00:27:11.104223 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:54802.service - OpenSSH per-connection server daemon (10.0.0.1:54802). 
Jul 10 00:27:11.159613 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 54802 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:11.161657 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:11.166705 systemd-logind[1538]: New session 16 of user core. Jul 10 00:27:11.175720 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:27:11.301772 sshd[4178]: Connection closed by 10.0.0.1 port 54802 Jul 10 00:27:11.302188 sshd-session[4176]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:11.308670 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:54802.service: Deactivated successfully. Jul 10 00:27:11.312029 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:27:11.313060 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:27:11.315295 systemd-logind[1538]: Removed session 16. Jul 10 00:27:16.316541 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:54816.service - OpenSSH per-connection server daemon (10.0.0.1:54816). Jul 10 00:27:16.373634 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 54816 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:16.375854 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:16.381783 systemd-logind[1538]: New session 17 of user core. Jul 10 00:27:16.391745 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:27:16.532001 sshd[4195]: Connection closed by 10.0.0.1 port 54816 Jul 10 00:27:16.532376 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:16.537539 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:54816.service: Deactivated successfully. Jul 10 00:27:16.539890 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:27:16.540900 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:27:16.542471 systemd-logind[1538]: Removed session 17. Jul 10 00:27:21.550425 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:48964.service - OpenSSH per-connection server daemon (10.0.0.1:48964). Jul 10 00:27:21.597666 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 48964 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:21.599420 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:21.604560 systemd-logind[1538]: New session 18 of user core. Jul 10 00:27:21.612648 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:27:21.744392 sshd[4210]: Connection closed by 10.0.0.1 port 48964 Jul 10 00:27:21.744961 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:21.759536 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:48964.service: Deactivated successfully. Jul 10 00:27:21.761821 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:27:21.762770 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:27:21.766525 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:48970.service - OpenSSH per-connection server daemon (10.0.0.1:48970). Jul 10 00:27:21.767249 systemd-logind[1538]: Removed session 18. 
Jul 10 00:27:21.815625 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 48970 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:21.817624 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:21.822674 systemd-logind[1538]: New session 19 of user core. Jul 10 00:27:21.832671 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:27:22.574146 sshd[4225]: Connection closed by 10.0.0.1 port 48970 Jul 10 00:27:22.574517 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:22.585240 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:48970.service: Deactivated successfully. Jul 10 00:27:22.587925 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:27:22.588901 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:27:22.592923 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:48976.service - OpenSSH per-connection server daemon (10.0.0.1:48976). Jul 10 00:27:22.593840 systemd-logind[1538]: Removed session 19. Jul 10 00:27:22.646852 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 48976 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:22.649325 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:22.655477 systemd-logind[1538]: New session 20 of user core. Jul 10 00:27:22.670720 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:27:23.559257 sshd[4240]: Connection closed by 10.0.0.1 port 48976 Jul 10 00:27:23.559923 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:23.572076 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:48976.service: Deactivated successfully. Jul 10 00:27:23.578494 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:27:23.579728 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:27:23.585843 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:48992.service - OpenSSH per-connection server daemon (10.0.0.1:48992). Jul 10 00:27:23.588063 systemd-logind[1538]: Removed session 20. Jul 10 00:27:23.633718 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 48992 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:23.635669 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:23.640702 systemd-logind[1538]: New session 21 of user core. Jul 10 00:27:23.651596 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:27:23.869205 sshd[4260]: Connection closed by 10.0.0.1 port 48992 Jul 10 00:27:23.870669 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:23.881677 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:48992.service: Deactivated successfully. Jul 10 00:27:23.884347 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:27:23.885508 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:27:23.890478 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:48994.service - OpenSSH per-connection server daemon (10.0.0.1:48994). Jul 10 00:27:23.891363 systemd-logind[1538]: Removed session 21. 
Jul 10 00:27:23.952581 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 48994 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:23.954646 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:23.960864 systemd-logind[1538]: New session 22 of user core. Jul 10 00:27:23.967602 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:27:24.091292 sshd[4276]: Connection closed by 10.0.0.1 port 48994 Jul 10 00:27:24.091732 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:24.096667 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:48994.service: Deactivated successfully. Jul 10 00:27:24.098976 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:27:24.099808 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:27:24.101103 systemd-logind[1538]: Removed session 22. Jul 10 00:27:29.105320 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:49000.service - OpenSSH per-connection server daemon (10.0.0.1:49000). Jul 10 00:27:29.162368 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 49000 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:29.164074 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:29.169621 systemd-logind[1538]: New session 23 of user core. Jul 10 00:27:29.179696 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:27:29.302495 sshd[4291]: Connection closed by 10.0.0.1 port 49000 Jul 10 00:27:29.302860 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:29.307606 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:49000.service: Deactivated successfully. Jul 10 00:27:29.310911 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:27:29.315253 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:27:29.316544 systemd-logind[1538]: Removed session 23. Jul 10 00:27:33.034339 kubelet[2756]: E0710 00:27:33.034236 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:34.318096 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:50204.service - OpenSSH per-connection server daemon (10.0.0.1:50204). Jul 10 00:27:34.379020 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 50204 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:34.380999 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:34.386077 systemd-logind[1538]: New session 24 of user core. Jul 10 00:27:34.400738 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:27:34.519681 sshd[4309]: Connection closed by 10.0.0.1 port 50204 Jul 10 00:27:34.520073 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:34.525405 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:50204.service: Deactivated successfully. Jul 10 00:27:34.527842 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:27:34.528791 systemd-logind[1538]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:27:34.530356 systemd-logind[1538]: Removed session 24. Jul 10 00:27:39.536375 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:47418.service - OpenSSH per-connection server daemon (10.0.0.1:47418). 
Jul 10 00:27:39.580646 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 47418 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:39.582281 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:39.587324 systemd-logind[1538]: New session 25 of user core. Jul 10 00:27:39.599614 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:27:39.739950 sshd[4325]: Connection closed by 10.0.0.1 port 47418 Jul 10 00:27:39.740309 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:39.745929 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:47418.service: Deactivated successfully. Jul 10 00:27:39.748354 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:27:39.749231 systemd-logind[1538]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:27:39.750584 systemd-logind[1538]: Removed session 25. Jul 10 00:27:41.034024 kubelet[2756]: E0710 00:27:41.033977 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:43.033466 kubelet[2756]: E0710 00:27:43.033412 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:44.753090 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:47426.service - OpenSSH per-connection server daemon (10.0.0.1:47426). Jul 10 00:27:44.809010 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 47426 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:44.811136 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:44.816734 systemd-logind[1538]: New session 26 of user core. Jul 10 00:27:44.831607 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:27:44.947085 sshd[4340]: Connection closed by 10.0.0.1 port 47426 Jul 10 00:27:44.947612 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:44.960762 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:47426.service: Deactivated successfully. Jul 10 00:27:44.963123 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:27:44.964245 systemd-logind[1538]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:27:44.967838 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:47428.service - OpenSSH per-connection server daemon (10.0.0.1:47428). Jul 10 00:27:44.968688 systemd-logind[1538]: Removed session 26. Jul 10 00:27:45.017945 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 47428 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:45.019461 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:45.024719 systemd-logind[1538]: New session 27 of user core. Jul 10 00:27:45.043713 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 10 00:27:46.517302 containerd[1574]: time="2025-07-10T00:27:46.517238744Z" level=info msg="StopContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" with timeout 30 (s)" Jul 10 00:27:46.527304 containerd[1574]: time="2025-07-10T00:27:46.527256699Z" level=info msg="Stop container \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" with signal terminated" Jul 10 00:27:46.541049 systemd[1]: cri-containerd-40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8.scope: Deactivated successfully. Jul 10 00:27:46.543373 containerd[1574]: time="2025-07-10T00:27:46.542923606Z" level=info msg="received exit event container_id:\"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" id:\"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" pid:3310 exited_at:{seconds:1752107266 nanos:542250459}" Jul 10 00:27:46.543531 containerd[1574]: time="2025-07-10T00:27:46.543277597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" id:\"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" pid:3310 exited_at:{seconds:1752107266 nanos:542250459}" Jul 10 00:27:46.553773 containerd[1574]: time="2025-07-10T00:27:46.553669351Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:27:46.557460 containerd[1574]: time="2025-07-10T00:27:46.557411547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" id:\"88197b43339ae3908a7916118813847afd3953ccb150a68ecb14c4d266ee5058\" pid:4381 exited_at:{seconds:1752107266 nanos:555963692}" Jul 10 00:27:46.561577 containerd[1574]: time="2025-07-10T00:27:46.561528423Z" level=info msg="StopContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" with timeout 2 (s)" Jul 10 00:27:46.561907 containerd[1574]: time="2025-07-10T00:27:46.561877034Z" level=info msg="Stop container \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" with signal terminated" Jul 10 00:27:46.571365 systemd-networkd[1485]: lxc_health: Link DOWN Jul 10 00:27:46.571376 systemd-networkd[1485]: lxc_health: Lost carrier Jul 10 00:27:46.574639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8-rootfs.mount: Deactivated successfully. Jul 10 00:27:46.592630 containerd[1574]: time="2025-07-10T00:27:46.592591514Z" level=info msg="StopContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" returns successfully" Jul 10 00:27:46.593986 systemd[1]: cri-containerd-14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050.scope: Deactivated successfully. Jul 10 00:27:46.594358 systemd[1]: cri-containerd-14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050.scope: Consumed 7.301s CPU time, 126M memory peak, 416K read from disk, 13.3M written to disk. 
Jul 10 00:27:46.595513 containerd[1574]: time="2025-07-10T00:27:46.595469501Z" level=info msg="received exit event container_id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" pid:3400 exited_at:{seconds:1752107266 nanos:595090853}" Jul 10 00:27:46.596622 containerd[1574]: time="2025-07-10T00:27:46.596559918Z" level=info msg="StopPodSandbox for \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\"" Jul 10 00:27:46.596703 containerd[1574]: time="2025-07-10T00:27:46.596685246Z" level=info msg="Container to stop \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.597505 containerd[1574]: time="2025-07-10T00:27:46.597468701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" id:\"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" pid:3400 exited_at:{seconds:1752107266 nanos:595090853}" Jul 10 00:27:46.610710 systemd[1]: cri-containerd-3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0.scope: Deactivated successfully. Jul 10 00:27:46.613514 containerd[1574]: time="2025-07-10T00:27:46.613470352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" id:\"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" pid:2949 exit_status:137 exited_at:{seconds:1752107266 nanos:612406715}" Jul 10 00:27:46.621100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:46.632737 containerd[1574]: time="2025-07-10T00:27:46.632701488Z" level=info msg="StopContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" returns successfully" Jul 10 00:27:46.633348 containerd[1574]: time="2025-07-10T00:27:46.633302578Z" level=info msg="StopPodSandbox for \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\"" Jul 10 00:27:46.633421 containerd[1574]: time="2025-07-10T00:27:46.633400544Z" level=info msg="Container to stop \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.633493 containerd[1574]: time="2025-07-10T00:27:46.633418878Z" level=info msg="Container to stop \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.633493 containerd[1574]: time="2025-07-10T00:27:46.633448645Z" level=info msg="Container to stop \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.633493 containerd[1574]: time="2025-07-10T00:27:46.633460778Z" level=info msg="Container to stop \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.633493 containerd[1574]: time="2025-07-10T00:27:46.633472379Z" level=info msg="Container to stop \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:46.641075 systemd[1]: cri-containerd-1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465.scope: Deactivated successfully. Jul 10 00:27:46.650404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0-rootfs.mount: Deactivated successfully. Jul 10 00:27:46.657940 containerd[1574]: time="2025-07-10T00:27:46.657814776Z" level=info msg="shim disconnected" id=3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0 namespace=k8s.io Jul 10 00:27:46.657940 containerd[1574]: time="2025-07-10T00:27:46.657886962Z" level=warning msg="cleaning up after shim disconnected" id=3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0 namespace=k8s.io Jul 10 00:27:46.658185 containerd[1574]: time="2025-07-10T00:27:46.657896090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:46.668885 containerd[1574]: time="2025-07-10T00:27:46.668838027Z" level=info msg="shim disconnected" id=1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465 namespace=k8s.io Jul 10 00:27:46.668885 containerd[1574]: time="2025-07-10T00:27:46.668882882Z" level=warning msg="cleaning up after shim disconnected" id=1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465 namespace=k8s.io Jul 10 00:27:46.669599 containerd[1574]: time="2025-07-10T00:27:46.668890878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:46.669663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:46.687717 containerd[1574]: time="2025-07-10T00:27:46.687628587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" id:\"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" pid:2934 exit_status:137 exited_at:{seconds:1752107266 nanos:642196552}" Jul 10 00:27:46.689783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0-shm.mount: Deactivated successfully. Jul 10 00:27:46.703375 containerd[1574]: time="2025-07-10T00:27:46.703323046Z" level=info msg="TearDown network for sandbox \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" successfully" Jul 10 00:27:46.703375 containerd[1574]: time="2025-07-10T00:27:46.703350368Z" level=info msg="StopPodSandbox for \"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" returns successfully" Jul 10 00:27:46.704341 containerd[1574]: time="2025-07-10T00:27:46.704299157Z" level=info msg="TearDown network for sandbox \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" successfully" Jul 10 00:27:46.704341 containerd[1574]: time="2025-07-10T00:27:46.704337820Z" level=info msg="StopPodSandbox for \"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" returns successfully" Jul 10 00:27:46.707164 containerd[1574]: time="2025-07-10T00:27:46.707121619Z" level=info msg="received exit event sandbox_id:\"3f39712686ac40e5eee47e9e2f5aaba254e1a60b654d62f354999d8df4bbccc0\" exit_status:137 exited_at:{seconds:1752107266 nanos:612406715}" Jul 10 00:27:46.707266 containerd[1574]: time="2025-07-10T00:27:46.707240424Z" level=info msg="received exit event sandbox_id:\"1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465\" exit_status:137 exited_at:{seconds:1752107266 nanos:642196552}" Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788148 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-kernel\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788196 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cni-path\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788210 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-hostproc\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788224 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-lib-modules\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788245 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh7rb\" (UniqueName: \"kubernetes.io/projected/1e811eaf-b35d-4bbc-a68a-833b4243f360-kube-api-access-jh7rb\") pod 
\"1e811eaf-b35d-4bbc-a68a-833b4243f360\" (UID: \"1e811eaf-b35d-4bbc-a68a-833b4243f360\") " Jul 10 00:27:46.788324 kubelet[2756]: I0710 00:27:46.788261 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e811eaf-b35d-4bbc-a68a-833b4243f360-cilium-config-path\") pod \"1e811eaf-b35d-4bbc-a68a-833b4243f360\" (UID: \"1e811eaf-b35d-4bbc-a68a-833b4243f360\") " Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788275 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9mm6\" (UniqueName: \"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-kube-api-access-f9mm6\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788288 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-etc-cni-netd\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788292 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cni-path" (OuterVolumeSpecName: "cni-path") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788304 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/420995f2-f48d-445d-982c-a6f4a978e305-clustermesh-secrets\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788342 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-hubble-tls\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.788969 kubelet[2756]: I0710 00:27:46.788357 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-cgroup\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789156 kubelet[2756]: I0710 00:27:46.788337 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.789156 kubelet[2756]: I0710 00:27:46.788388 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.789156 kubelet[2756]: I0710 00:27:46.788370 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-xtables-lock\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789156 kubelet[2756]: I0710 00:27:46.788482 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-net\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789156 kubelet[2756]: I0710 00:27:46.788512 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/420995f2-f48d-445d-982c-a6f4a978e305-cilium-config-path\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788534 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-bpf-maps\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788551 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-run\") pod \"420995f2-f48d-445d-982c-a6f4a978e305\" (UID: \"420995f2-f48d-445d-982c-a6f4a978e305\") " Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788610 2756 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788621 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788631 2756 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.789299 kubelet[2756]: I0710 00:27:46.788655 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.789461 kubelet[2756]: I0710 00:27:46.788673 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.789750 kubelet[2756]: I0710 00:27:46.789598 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.791641 kubelet[2756]: I0710 00:27:46.791608 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e811eaf-b35d-4bbc-a68a-833b4243f360-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e811eaf-b35d-4bbc-a68a-833b4243f360" (UID: "1e811eaf-b35d-4bbc-a68a-833b4243f360"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:27:46.792330 kubelet[2756]: I0710 00:27:46.791813 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.793279 kubelet[2756]: I0710 00:27:46.793222 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e811eaf-b35d-4bbc-a68a-833b4243f360-kube-api-access-jh7rb" (OuterVolumeSpecName: "kube-api-access-jh7rb") pod "1e811eaf-b35d-4bbc-a68a-833b4243f360" (UID: "1e811eaf-b35d-4bbc-a68a-833b4243f360"). InnerVolumeSpecName "kube-api-access-jh7rb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:46.793429 kubelet[2756]: I0710 00:27:46.793407 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-hostproc" (OuterVolumeSpecName: "hostproc") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.793550 kubelet[2756]: I0710 00:27:46.793529 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.793866 kubelet[2756]: I0710 00:27:46.793816 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:46.793866 kubelet[2756]: I0710 00:27:46.793856 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:27:46.794405 kubelet[2756]: I0710 00:27:46.794381 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420995f2-f48d-445d-982c-a6f4a978e305-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:27:46.795610 kubelet[2756]: I0710 00:27:46.795584 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420995f2-f48d-445d-982c-a6f4a978e305-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:27:46.796482 kubelet[2756]: I0710 00:27:46.796421 2756 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-kube-api-access-f9mm6" (OuterVolumeSpecName: "kube-api-access-f9mm6") pod "420995f2-f48d-445d-982c-a6f4a978e305" (UID: "420995f2-f48d-445d-982c-a6f4a978e305"). InnerVolumeSpecName "kube-api-access-f9mm6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889541 2756 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889573 2756 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889587 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jh7rb\" (UniqueName: \"kubernetes.io/projected/1e811eaf-b35d-4bbc-a68a-833b4243f360-kube-api-access-jh7rb\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889595 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e811eaf-b35d-4bbc-a68a-833b4243f360-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889604 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9mm6\" (UniqueName: \"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-kube-api-access-f9mm6\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889612 2756 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889622 2756 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/420995f2-f48d-445d-982c-a6f4a978e305-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.889609 kubelet[2756]: I0710 00:27:46.889630 2756 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/420995f2-f48d-445d-982c-a6f4a978e305-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.890053 kubelet[2756]: I0710 00:27:46.889639 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.890053 kubelet[2756]: I0710 00:27:46.889647 2756 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.890053 kubelet[2756]: I0710 00:27:46.889655 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/420995f2-f48d-445d-982c-a6f4a978e305-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.890053 kubelet[2756]: I0710 00:27:46.889663 2756 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:46.890053 kubelet[2756]: I0710 00:27:46.889670 2756 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/420995f2-f48d-445d-982c-a6f4a978e305-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:27:47.322491 kubelet[2756]: I0710 00:27:47.322408 2756 scope.go:117] "RemoveContainer" containerID="14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050" Jul 10 00:27:47.324222 containerd[1574]: time="2025-07-10T00:27:47.324147804Z" level=info msg="RemoveContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\"" Jul 10 00:27:47.332124 systemd[1]: Removed slice kubepods-besteffort-pod1e811eaf_b35d_4bbc_a68a_833b4243f360.slice - libcontainer container kubepods-besteffort-pod1e811eaf_b35d_4bbc_a68a_833b4243f360.slice. Jul 10 00:27:47.332649 containerd[1574]: time="2025-07-10T00:27:47.332609193Z" level=info msg="RemoveContainer for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" returns successfully" Jul 10 00:27:47.332866 kubelet[2756]: I0710 00:27:47.332845 2756 scope.go:117] "RemoveContainer" containerID="a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0" Jul 10 00:27:47.334586 containerd[1574]: time="2025-07-10T00:27:47.334550925Z" level=info msg="RemoveContainer for \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\"" Jul 10 00:27:47.334630 systemd[1]: Removed slice kubepods-burstable-pod420995f2_f48d_445d_982c_a6f4a978e305.slice - libcontainer container kubepods-burstable-pod420995f2_f48d_445d_982c_a6f4a978e305.slice. Jul 10 00:27:47.334731 systemd[1]: kubepods-burstable-pod420995f2_f48d_445d_982c_a6f4a978e305.slice: Consumed 7.430s CPU time, 126.3M memory peak, 428K read from disk, 13.3M written to disk. 
Jul 10 00:27:47.345687 containerd[1574]: time="2025-07-10T00:27:47.345632302Z" level=info msg="RemoveContainer for \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" returns successfully" Jul 10 00:27:47.346005 kubelet[2756]: I0710 00:27:47.345963 2756 scope.go:117] "RemoveContainer" containerID="ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093" Jul 10 00:27:47.348309 containerd[1574]: time="2025-07-10T00:27:47.348263289Z" level=info msg="RemoveContainer for \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\"" Jul 10 00:27:47.353809 containerd[1574]: time="2025-07-10T00:27:47.353775227Z" level=info msg="RemoveContainer for \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" returns successfully" Jul 10 00:27:47.354383 kubelet[2756]: I0710 00:27:47.354359 2756 scope.go:117] "RemoveContainer" containerID="d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106" Jul 10 00:27:47.356230 containerd[1574]: time="2025-07-10T00:27:47.355764158Z" level=info msg="RemoveContainer for \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\"" Jul 10 00:27:47.359953 containerd[1574]: time="2025-07-10T00:27:47.359918464Z" level=info msg="RemoveContainer for \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" returns successfully" Jul 10 00:27:47.360211 kubelet[2756]: I0710 00:27:47.360160 2756 scope.go:117] "RemoveContainer" containerID="f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa" Jul 10 00:27:47.362082 containerd[1574]: time="2025-07-10T00:27:47.361997986Z" level=info msg="RemoveContainer for \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\"" Jul 10 00:27:47.366038 containerd[1574]: time="2025-07-10T00:27:47.365991386Z" level=info msg="RemoveContainer for \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" returns successfully" Jul 10 00:27:47.366346 kubelet[2756]: I0710 00:27:47.366315 2756 scope.go:117] "RemoveContainer" containerID="14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050" Jul 10 00:27:47.366682 containerd[1574]: time="2025-07-10T00:27:47.366623775Z" level=error msg="ContainerStatus for \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\": not found" Jul 10 00:27:47.370252 kubelet[2756]: E0710 00:27:47.370220 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\": not found" containerID="14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050" Jul 10 00:27:47.370393 kubelet[2756]: I0710 00:27:47.370265 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050"} err="failed to get container status \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\": rpc error: code = NotFound desc = an error occurred when try to find container \"14de38d9c88acac377c5c1bb07dbe8954504832c6876be8c635dc063e0806050\": not found" Jul 10 00:27:47.370393 kubelet[2756]: I0710 00:27:47.370356 2756 scope.go:117] "RemoveContainer" containerID="a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0" Jul 10 00:27:47.370685 containerd[1574]: 
time="2025-07-10T00:27:47.370633777Z" level=error msg="ContainerStatus for \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\": not found" Jul 10 00:27:47.370832 kubelet[2756]: E0710 00:27:47.370805 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\": not found" containerID="a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0" Jul 10 00:27:47.370882 kubelet[2756]: I0710 00:27:47.370832 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0"} err="failed to get container status \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"a66910599303cbe7082284e3032d8a7816dda67b57121cf15ae08aa778a7a2a0\": not found" Jul 10 00:27:47.370882 kubelet[2756]: I0710 00:27:47.370846 2756 scope.go:117] "RemoveContainer" containerID="ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093" Jul 10 00:27:47.371043 containerd[1574]: time="2025-07-10T00:27:47.371006564Z" level=error msg="ContainerStatus for \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\": not found" Jul 10 00:27:47.371186 kubelet[2756]: E0710 00:27:47.371148 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\": not found" containerID="ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093" Jul 10 00:27:47.371238 kubelet[2756]: I0710 00:27:47.371190 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093"} err="failed to get container status \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea810c115c10b4042007af097a28d0005c7b5c60cde65873bc6494f413a8b093\": not found" Jul 10 00:27:47.371238 kubelet[2756]: I0710 00:27:47.371210 2756 scope.go:117] "RemoveContainer" containerID="d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106" Jul 10 00:27:47.371542 containerd[1574]: time="2025-07-10T00:27:47.371499779Z" level=error msg="ContainerStatus for \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\": not found" Jul 10 00:27:47.371736 kubelet[2756]: E0710 00:27:47.371707 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\": not found" containerID="d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106" Jul 10 00:27:47.371775 kubelet[2756]: I0710 00:27:47.371740 2756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106"} err="failed to get container status \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\": rpc error: code = NotFound desc = an error occurred when try to find container \"d60405f7b073745b42992e7fbc9ae344f0469c299a32689d84ba734431c6e106\": not found" Jul 10 00:27:47.371775 kubelet[2756]: I0710 00:27:47.371759 2756 scope.go:117] "RemoveContainer" containerID="f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa" Jul 10 00:27:47.371992 containerd[1574]: time="2025-07-10T00:27:47.371951044Z" level=error msg="ContainerStatus for \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\": not found" Jul 10 00:27:47.372280 kubelet[2756]: E0710 00:27:47.372232 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\": not found" containerID="f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa" Jul 10 00:27:47.372348 kubelet[2756]: I0710 00:27:47.372271 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa"} err="failed to get container status \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9f98da5207dd93a5a3df623ac51f1b919db958feb0dc9865eb36e07e2c0e0aa\": not found" Jul 10 00:27:47.372348 kubelet[2756]: I0710 00:27:47.372300 2756 scope.go:117] "RemoveContainer" containerID="40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8" Jul 10 00:27:47.373948 containerd[1574]: time="2025-07-10T00:27:47.373919435Z" level=info msg="RemoveContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\"" Jul 10 00:27:47.377696 containerd[1574]: time="2025-07-10T00:27:47.377666658Z" level=info msg="RemoveContainer for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" returns successfully" Jul 10 00:27:47.377914 kubelet[2756]: I0710 00:27:47.377881 2756 scope.go:117] "RemoveContainer" containerID="40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8" Jul 10 00:27:47.378214 containerd[1574]: time="2025-07-10T00:27:47.378135828Z" level=error msg="ContainerStatus for \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\": not found" Jul 10 00:27:47.378342 kubelet[2756]: E0710 00:27:47.378316 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\": not found" containerID="40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8" Jul 10 00:27:47.378379 kubelet[2756]: I0710 00:27:47.378347 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8"} err="failed to 
get container status \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"40bc52c44de49683b197fc8d54aefb4f10f1bdbe58de4870206da6212e5ffcb8\": not found" Jul 10 00:27:47.578151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1969c45d5ef308dcbbceedc18520f3200e74ac54b0019749fa670408d4027465-shm.mount: Deactivated successfully. Jul 10 00:27:47.578319 systemd[1]: var-lib-kubelet-pods-1e811eaf\x2db35d\x2d4bbc\x2da68a\x2d833b4243f360-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djh7rb.mount: Deactivated successfully. Jul 10 00:27:47.578451 systemd[1]: var-lib-kubelet-pods-420995f2\x2df48d\x2d445d\x2d982c\x2da6f4a978e305-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:27:47.578552 systemd[1]: var-lib-kubelet-pods-420995f2\x2df48d\x2d445d\x2d982c\x2da6f4a978e305-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9mm6.mount: Deactivated successfully. Jul 10 00:27:47.578649 systemd[1]: var-lib-kubelet-pods-420995f2\x2df48d\x2d445d\x2d982c\x2da6f4a978e305-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:27:48.036731 kubelet[2756]: I0710 00:27:48.036670 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e811eaf-b35d-4bbc-a68a-833b4243f360" path="/var/lib/kubelet/pods/1e811eaf-b35d-4bbc-a68a-833b4243f360/volumes" Jul 10 00:27:48.037395 kubelet[2756]: I0710 00:27:48.037362 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420995f2-f48d-445d-982c-a6f4a978e305" path="/var/lib/kubelet/pods/420995f2-f48d-445d-982c-a6f4a978e305/volumes" Jul 10 00:27:48.358495 sshd[4355]: Connection closed by 10.0.0.1 port 47428 Jul 10 00:27:48.358809 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:48.373141 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:47428.service: Deactivated successfully. Jul 10 00:27:48.375674 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:27:48.376522 systemd-logind[1538]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:27:48.380533 systemd[1]: Started sshd@27-10.0.0.116:22-10.0.0.1:47434.service - OpenSSH per-connection server daemon (10.0.0.1:47434). Jul 10 00:27:48.381137 systemd-logind[1538]: Removed session 27. Jul 10 00:27:48.430822 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 47434 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:48.432500 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:48.437722 systemd-logind[1538]: New session 28 of user core. Jul 10 00:27:48.448584 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 00:27:48.872616 sshd[4506]: Connection closed by 10.0.0.1 port 47434 Jul 10 00:27:48.874491 sshd-session[4504]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:48.887683 systemd[1]: sshd@27-10.0.0.116:22-10.0.0.1:47434.service: Deactivated successfully. Jul 10 00:27:48.892059 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 00:27:48.893512 systemd-logind[1538]: Session 28 logged out. Waiting for processes to exit. 
Jul 10 00:27:48.894793 kubelet[2756]: I0710 00:27:48.894701 2756 memory_manager.go:355] "RemoveStaleState removing state" podUID="420995f2-f48d-445d-982c-a6f4a978e305" containerName="cilium-agent" Jul 10 00:27:48.894793 kubelet[2756]: I0710 00:27:48.894724 2756 memory_manager.go:355] "RemoveStaleState removing state" podUID="1e811eaf-b35d-4bbc-a68a-833b4243f360" containerName="cilium-operator" Jul 10 00:27:48.899739 systemd[1]: Started sshd@28-10.0.0.116:22-10.0.0.1:47444.service - OpenSSH per-connection server daemon (10.0.0.1:47444). Jul 10 00:27:48.902535 systemd-logind[1538]: Removed session 28. Jul 10 00:27:48.923348 systemd[1]: Created slice kubepods-burstable-pode763834d_defc_4e9e_9e12_ca1e488303a5.slice - libcontainer container kubepods-burstable-pode763834d_defc_4e9e_9e12_ca1e488303a5.slice. Jul 10 00:27:48.950631 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 47444 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:48.952135 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:48.956911 systemd-logind[1538]: New session 29 of user core. Jul 10 00:27:48.970608 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 00:27:49.002847 kubelet[2756]: I0710 00:27:49.002801 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-xtables-lock\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.002929 kubelet[2756]: I0710 00:27:49.002854 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e763834d-defc-4e9e-9e12-ca1e488303a5-cilium-config-path\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.002929 kubelet[2756]: I0710 00:27:49.002875 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-lib-modules\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.002929 kubelet[2756]: I0710 00:27:49.002890 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e763834d-defc-4e9e-9e12-ca1e488303a5-clustermesh-secrets\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.002929 kubelet[2756]: I0710 00:27:49.002907 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-etc-cni-netd\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.002929 kubelet[2756]: I0710 00:27:49.002921 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e763834d-defc-4e9e-9e12-ca1e488303a5-cilium-ipsec-secrets\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 
00:27:49.002935 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-host-proc-sys-net\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 00:27:49.002952 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbpx\" (UniqueName: \"kubernetes.io/projected/e763834d-defc-4e9e-9e12-ca1e488303a5-kube-api-access-9vbpx\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 00:27:49.002968 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-cilium-run\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 00:27:49.002982 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-bpf-maps\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 00:27:49.002995 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-hostproc\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003048 kubelet[2756]: I0710 00:27:49.003012 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-host-proc-sys-kernel\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003200 kubelet[2756]: I0710 00:27:49.003026 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-cilium-cgroup\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003200 kubelet[2756]: I0710 00:27:49.003038 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e763834d-defc-4e9e-9e12-ca1e488303a5-cni-path\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.003200 kubelet[2756]: I0710 00:27:49.003052 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e763834d-defc-4e9e-9e12-ca1e488303a5-hubble-tls\") pod \"cilium-hv4hb\" (UID: \"e763834d-defc-4e9e-9e12-ca1e488303a5\") " pod="kube-system/cilium-hv4hb" Jul 10 00:27:49.023326 sshd[4520]: Connection closed by 10.0.0.1 port 47444 Jul 10 00:27:49.023716 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:49.033185 systemd[1]: 
sshd@28-10.0.0.116:22-10.0.0.1:47444.service: Deactivated successfully. Jul 10 00:27:49.035152 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 00:27:49.035931 systemd-logind[1538]: Session 29 logged out. Waiting for processes to exit. Jul 10 00:27:49.038068 systemd-logind[1538]: Removed session 29. Jul 10 00:27:49.039317 systemd[1]: Started sshd@29-10.0.0.116:22-10.0.0.1:47454.service - OpenSSH per-connection server daemon (10.0.0.1:47454). Jul 10 00:27:49.089447 sshd[4527]: Accepted publickey for core from 10.0.0.1 port 47454 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:27:49.090863 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:49.095513 systemd-logind[1538]: New session 30 of user core. Jul 10 00:27:49.106697 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 10 00:27:49.229409 kubelet[2756]: E0710 00:27:49.228765 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:49.230019 containerd[1574]: time="2025-07-10T00:27:49.229673694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hv4hb,Uid:e763834d-defc-4e9e-9e12-ca1e488303a5,Namespace:kube-system,Attempt:0,}" Jul 10 00:27:49.253393 containerd[1574]: time="2025-07-10T00:27:49.253346049Z" level=info msg="connecting to shim 15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:27:49.284613 systemd[1]: Started cri-containerd-15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d.scope - libcontainer container 15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d. 
Jul 10 00:27:49.315244 containerd[1574]: time="2025-07-10T00:27:49.315184513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hv4hb,Uid:e763834d-defc-4e9e-9e12-ca1e488303a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\"" Jul 10 00:27:49.317520 kubelet[2756]: E0710 00:27:49.317488 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:49.323516 containerd[1574]: time="2025-07-10T00:27:49.322803160Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:27:49.331819 containerd[1574]: time="2025-07-10T00:27:49.331515628Z" level=info msg="Container b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:49.350039 containerd[1574]: time="2025-07-10T00:27:49.349988140Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\"" Jul 10 00:27:49.350456 containerd[1574]: time="2025-07-10T00:27:49.350406042Z" level=info msg="StartContainer for \"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\"" Jul 10 00:27:49.351332 containerd[1574]: time="2025-07-10T00:27:49.351302441Z" level=info msg="connecting to shim b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" protocol=ttrpc version=3 Jul 10 00:27:49.374585 systemd[1]: Started cri-containerd-b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935.scope - libcontainer container b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935. Jul 10 00:27:49.406899 containerd[1574]: time="2025-07-10T00:27:49.406854353Z" level=info msg="StartContainer for \"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\" returns successfully" Jul 10 00:27:49.418586 systemd[1]: cri-containerd-b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935.scope: Deactivated successfully. 
Jul 10 00:27:49.421030 containerd[1574]: time="2025-07-10T00:27:49.420844211Z" level=info msg="received exit event container_id:\"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\" id:\"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\" pid:4600 exited_at:{seconds:1752107269 nanos:420426620}" Jul 10 00:27:49.421030 containerd[1574]: time="2025-07-10T00:27:49.420925586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\" id:\"b092976a8df35ab11dcb20eb26a4654a08e507b41da5be83fa383d0f52f0c935\" pid:4600 exited_at:{seconds:1752107269 nanos:420426620}" Jul 10 00:27:50.335794 kubelet[2756]: E0710 00:27:50.335756 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:50.337609 containerd[1574]: time="2025-07-10T00:27:50.337567553Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:27:50.405932 containerd[1574]: time="2025-07-10T00:27:50.405868665Z" level=info msg="Container 43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:50.414389 containerd[1574]: time="2025-07-10T00:27:50.414325937Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\"" Jul 10 00:27:50.415008 containerd[1574]: time="2025-07-10T00:27:50.414959327Z" level=info msg="StartContainer for \"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\"" Jul 10 00:27:50.416033 containerd[1574]: time="2025-07-10T00:27:50.416004617Z" level=info msg="connecting to shim 43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" protocol=ttrpc version=3 Jul 10 00:27:50.442601 systemd[1]: Started cri-containerd-43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3.scope - libcontainer container 43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3. Jul 10 00:27:50.473875 containerd[1574]: time="2025-07-10T00:27:50.473813982Z" level=info msg="StartContainer for \"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\" returns successfully" Jul 10 00:27:50.480671 systemd[1]: cri-containerd-43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3.scope: Deactivated successfully. 
Jul 10 00:27:50.482025 containerd[1574]: time="2025-07-10T00:27:50.481986575Z" level=info msg="received exit event container_id:\"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\" id:\"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\" pid:4645 exited_at:{seconds:1752107270 nanos:481791095}" Jul 10 00:27:50.482226 containerd[1574]: time="2025-07-10T00:27:50.482195972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\" id:\"43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3\" pid:4645 exited_at:{seconds:1752107270 nanos:481791095}" Jul 10 00:27:50.503511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e1b5a7051c826b03e8c764f4ec40917a4800886d9051099570e3b8b93e54f3-rootfs.mount: Deactivated successfully. Jul 10 00:27:51.106603 kubelet[2756]: E0710 00:27:51.106551 2756 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:27:51.339851 kubelet[2756]: E0710 00:27:51.339812 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:51.342672 containerd[1574]: time="2025-07-10T00:27:51.342619135Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:27:51.490963 containerd[1574]: time="2025-07-10T00:27:51.490899974Z" level=info msg="Container 5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:51.502749 containerd[1574]: time="2025-07-10T00:27:51.502692128Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\"" Jul 10 00:27:51.503287 containerd[1574]: time="2025-07-10T00:27:51.503258592Z" level=info msg="StartContainer for \"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\"" Jul 10 00:27:51.504740 containerd[1574]: time="2025-07-10T00:27:51.504691606Z" level=info msg="connecting to shim 5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" protocol=ttrpc version=3 Jul 10 00:27:51.528649 systemd[1]: Started cri-containerd-5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a.scope - libcontainer container 5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a. Jul 10 00:27:51.580925 containerd[1574]: time="2025-07-10T00:27:51.580873416Z" level=info msg="StartContainer for \"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\" returns successfully" Jul 10 00:27:51.584543 systemd[1]: cri-containerd-5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a.scope: Deactivated successfully. 
Jul 10 00:27:51.585955 containerd[1574]: time="2025-07-10T00:27:51.585919834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\" id:\"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\" pid:4691 exited_at:{seconds:1752107271 nanos:585520508}" Jul 10 00:27:51.586047 containerd[1574]: time="2025-07-10T00:27:51.585947778Z" level=info msg="received exit event container_id:\"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\" id:\"5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a\" pid:4691 exited_at:{seconds:1752107271 nanos:585520508}" Jul 10 00:27:51.612669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e055ab88dc74a6e6c7d2fc22b5e446e491bb30636dfaf00217e73c68712ad4a-rootfs.mount: Deactivated successfully. Jul 10 00:27:52.344254 kubelet[2756]: E0710 00:27:52.344198 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:52.346257 containerd[1574]: time="2025-07-10T00:27:52.346217930Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:27:52.488764 containerd[1574]: time="2025-07-10T00:27:52.488697626Z" level=info msg="Container 5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:52.496918 containerd[1574]: time="2025-07-10T00:27:52.496865283Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\"" Jul 10 00:27:52.497479 containerd[1574]: time="2025-07-10T00:27:52.497420835Z" level=info msg="StartContainer for \"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\"" Jul 10 00:27:52.498326 containerd[1574]: time="2025-07-10T00:27:52.498302114Z" level=info msg="connecting to shim 5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" protocol=ttrpc version=3 Jul 10 00:27:52.519582 systemd[1]: Started cri-containerd-5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233.scope - libcontainer container 5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233. Jul 10 00:27:52.546536 systemd[1]: cri-containerd-5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233.scope: Deactivated successfully. 
Jul 10 00:27:52.547153 containerd[1574]: time="2025-07-10T00:27:52.547092656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\" id:\"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\" pid:4731 exited_at:{seconds:1752107272 nanos:546801154}" Jul 10 00:27:52.548598 containerd[1574]: time="2025-07-10T00:27:52.548564583Z" level=info msg="received exit event container_id:\"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\" id:\"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\" pid:4731 exited_at:{seconds:1752107272 nanos:546801154}" Jul 10 00:27:52.557518 containerd[1574]: time="2025-07-10T00:27:52.557492460Z" level=info msg="StartContainer for \"5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233\" returns successfully" Jul 10 00:27:52.571603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bb29552cf0ac2f0bfb3e2b3c01462d57a30bdd2d7870ee5afa4d1b5e0b87233-rootfs.mount: Deactivated successfully. Jul 10 00:27:53.033655 kubelet[2756]: E0710 00:27:53.033617 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:53.350037 kubelet[2756]: E0710 00:27:53.349905 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:53.352402 containerd[1574]: time="2025-07-10T00:27:53.352355229Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:27:53.366401 containerd[1574]: time="2025-07-10T00:27:53.366306462Z" level=info msg="Container eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:53.376837 containerd[1574]: time="2025-07-10T00:27:53.376781805Z" level=info msg="CreateContainer within sandbox \"15bd93ee98d3d55110cd5d7c7e610ea4416796141d732cf15858875108e10f8d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\"" Jul 10 00:27:53.377386 containerd[1574]: time="2025-07-10T00:27:53.377359129Z" level=info msg="StartContainer for \"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\"" Jul 10 00:27:53.378697 containerd[1574]: time="2025-07-10T00:27:53.378667095Z" level=info msg="connecting to shim eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2" address="unix:///run/containerd/s/c54889531a1590706ec4a210eaab216776f23ba1ae2a416076492eeed3279776" protocol=ttrpc version=3 Jul 10 00:27:53.405697 systemd[1]: Started cri-containerd-eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2.scope - libcontainer container eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2. 
Jul 10 00:27:53.445294 containerd[1574]: time="2025-07-10T00:27:53.445245617Z" level=info msg="StartContainer for \"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" returns successfully" Jul 10 00:27:53.520406 containerd[1574]: time="2025-07-10T00:27:53.520325653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"c673a29c466f4f22a3d55bc0d204a03d60eb03280f4344bc1f1adad365fe6e57\" pid:4799 exited_at:{seconds:1752107273 nanos:519906310}" Jul 10 00:27:53.909499 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 10 00:27:54.356149 kubelet[2756]: E0710 00:27:54.355633 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:54.371913 kubelet[2756]: I0710 00:27:54.371825 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hv4hb" podStartSLOduration=6.371804205 podStartE2EDuration="6.371804205s" podCreationTimestamp="2025-07-10 00:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:27:54.371584369 +0000 UTC m=+98.422624009" watchObservedRunningTime="2025-07-10 00:27:54.371804205 +0000 UTC m=+98.422843835" Jul 10 00:27:55.393548 kubelet[2756]: E0710 00:27:55.393491 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:55.473320 containerd[1574]: time="2025-07-10T00:27:55.473254073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"afc13bae3bda1a1d0274fd70b79871e324060f658de35f293988efb6d9d9b598\" pid:4925 exit_status:1 exited_at:{seconds:1752107275 nanos:472611156}" Jul 10 00:27:56.395807 kubelet[2756]: E0710 00:27:56.395759 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:57.133155 systemd-networkd[1485]: lxc_health: Link UP Jul 10 00:27:57.142623 systemd-networkd[1485]: lxc_health: Gained carrier Jul 10 00:27:57.397646 kubelet[2756]: E0710 00:27:57.397109 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:57.593034 containerd[1574]: time="2025-07-10T00:27:57.592988852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"1f1c7f5cfd186157f3da6c8b9cd56f7afc02afc64d05e8d83e4e41fc14cf0e29\" pid:5323 exited_at:{seconds:1752107277 nanos:592400389}" Jul 10 00:27:58.325719 systemd-networkd[1485]: lxc_health: Gained IPv6LL Jul 10 00:27:58.399088 kubelet[2756]: E0710 00:27:58.399033 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:27:59.400757 kubelet[2756]: E0710 00:27:59.400705 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 
00:27:59.693646 containerd[1574]: time="2025-07-10T00:27:59.693290209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"f398d6ab62304fb03791ad30a4c0b42d0e48c92e18b854914944f2a93df7ab95\" pid:5361 exited_at:{seconds:1752107279 nanos:692899159}" Jul 10 00:28:01.796053 containerd[1574]: time="2025-07-10T00:28:01.796000126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"269c259b35faa3b6bd5f4fb6cf5d11c34c86bbbc5435d7927f80168fee2bcfa5\" pid:5389 exited_at:{seconds:1752107281 nanos:795632019}" Jul 10 00:28:03.891502 containerd[1574]: time="2025-07-10T00:28:03.891426996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb936860f73fa4ecdf1bcc1d3acb6270242b63f2947f051d325517d1c16bb8b2\" id:\"c66468fc573e0d34083a4ec7abadb5b5dce7c08060a6ed751f89c4df99fd9098\" pid:5419 exited_at:{seconds:1752107283 nanos:890918805}" Jul 10 00:28:03.906450 sshd[4534]: Connection closed by 10.0.0.1 port 47454 Jul 10 00:28:03.906944 sshd-session[4527]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:03.910846 systemd[1]: sshd@29-10.0.0.116:22-10.0.0.1:47454.service: Deactivated successfully. Jul 10 00:28:03.913354 systemd[1]: session-30.scope: Deactivated successfully. Jul 10 00:28:03.915229 systemd-logind[1538]: Session 30 logged out. Waiting for processes to exit. Jul 10 00:28:03.917178 systemd-logind[1538]: Removed session 30.