Aug 19 08:08:29.903467 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025 Aug 19 08:08:29.903493 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:29.903504 kernel: BIOS-provided physical RAM map: Aug 19 08:08:29.903511 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:08:29.903518 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:08:29.903525 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:08:29.903532 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:08:29.903539 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Aug 19 08:08:29.903549 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Aug 19 08:08:29.903555 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:08:29.903562 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:08:29.903571 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:08:29.903578 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:08:29.903584 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:08:29.903592 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:08:29.903600 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:08:29.903611 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:08:29.903618 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:08:29.903639 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:08:29.903646 kernel: NX (Execute Disable) protection: active Aug 19 08:08:29.903653 kernel: APIC: Static calls initialized Aug 19 08:08:29.903661 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Aug 19 08:08:29.903668 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Aug 19 08:08:29.903675 kernel: extended physical RAM map: Aug 19 08:08:29.903682 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:08:29.903690 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:08:29.903697 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:08:29.903707 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:08:29.903715 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Aug 19 08:08:29.903722 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Aug 19 08:08:29.903729 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Aug 19 08:08:29.903736 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Aug 19 08:08:29.903743 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Aug 19 08:08:29.903750 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Aug 19 08:08:29.903757 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:08:29.903765 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:08:29.903772 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:08:29.903779 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:08:29.903788 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:08:29.903796 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:08:29.903806 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:08:29.903814 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:08:29.903821 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:08:29.903829 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:08:29.903838 kernel: efi: EFI v2.7 by EDK II Aug 19 08:08:29.903854 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Aug 19 08:08:29.903861 kernel: random: crng init done Aug 19 08:08:29.903869 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Aug 19 08:08:29.903876 kernel: secureboot: Secure boot enabled Aug 19 08:08:29.903884 kernel: SMBIOS 2.8 present. Aug 19 08:08:29.903891 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Aug 19 08:08:29.903899 kernel: DMI: Memory slots populated: 1/1 Aug 19 08:08:29.903906 kernel: Hypervisor detected: KVM Aug 19 08:08:29.903913 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 19 08:08:29.903921 kernel: kvm-clock: using sched offset of 6609103176 cycles Aug 19 08:08:29.903931 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 19 08:08:29.903940 kernel: tsc: Detected 2794.750 MHz processor Aug 19 08:08:29.903948 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 19 08:08:29.903955 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 19 08:08:29.903963 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Aug 19 08:08:29.903971 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 19 08:08:29.903981 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 19 08:08:29.903991 kernel: Using GB pages for direct mapping Aug 19 08:08:29.904000 kernel: ACPI: Early table checksum verification disabled Aug 19 08:08:29.904010 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Aug 19 08:08:29.904018 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Aug 19 08:08:29.904026 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:29.904033 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:29.904041 kernel: ACPI: FACS 0x000000009BBDD000 000040 Aug 19 08:08:29.904049 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:29.904056 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:29.904064 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:08:29.904071 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Aug 19 08:08:29.904081 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 19 08:08:29.904089 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Aug 19 08:08:29.904096 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Aug 19 08:08:29.904104 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Aug 19 08:08:29.904111 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Aug 19 08:08:29.904119 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Aug 19 08:08:29.904126 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Aug 19 08:08:29.904134 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Aug 19 08:08:29.904141 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Aug 19 08:08:29.904151 kernel: No NUMA configuration found Aug 19 08:08:29.904159 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Aug 19 08:08:29.904166 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Aug 19 08:08:29.904174 kernel: Zone ranges: Aug 19 08:08:29.904182 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 19 08:08:29.904189 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Aug 19 08:08:29.904197 kernel: Normal empty Aug 19 08:08:29.904204 kernel: Device empty Aug 19 08:08:29.904212 kernel: Movable zone start for each node Aug 19 08:08:29.904221 kernel: Early memory node ranges Aug 19 08:08:29.904229 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Aug 19 08:08:29.904237 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Aug 19 08:08:29.904244 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Aug 19 08:08:29.904283 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Aug 19 08:08:29.904291 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Aug 19 08:08:29.904299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Aug 19 08:08:29.904307 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:08:29.904324 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Aug 19 08:08:29.904342 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 19 08:08:29.904370 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 19 08:08:29.904388 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Aug 19 08:08:29.904396 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Aug 19 08:08:29.904403 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 19 08:08:29.904411 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 19 08:08:29.904419 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 19 08:08:29.904426 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 19 08:08:29.904443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 19 08:08:29.904469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 19 08:08:29.904489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 19 08:08:29.904497 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 19 08:08:29.904504 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 19 08:08:29.904512 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 19 08:08:29.904520 kernel: TSC deadline timer available Aug 19 08:08:29.904527 kernel: CPU topo: Max. 
logical packages: 1 Aug 19 08:08:29.904535 kernel: CPU topo: Max. logical dies: 1 Aug 19 08:08:29.904542 kernel: CPU topo: Max. dies per package: 1 Aug 19 08:08:29.904560 kernel: CPU topo: Max. threads per core: 1 Aug 19 08:08:29.904568 kernel: CPU topo: Num. cores per package: 4 Aug 19 08:08:29.904575 kernel: CPU topo: Num. threads per package: 4 Aug 19 08:08:29.904583 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Aug 19 08:08:29.904595 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 19 08:08:29.904603 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 19 08:08:29.904611 kernel: kvm-guest: setup PV sched yield Aug 19 08:08:29.904619 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Aug 19 08:08:29.904627 kernel: Booting paravirtualized kernel on KVM Aug 19 08:08:29.904637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 19 08:08:29.904646 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 19 08:08:29.904654 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Aug 19 08:08:29.904661 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Aug 19 08:08:29.904669 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 19 08:08:29.904677 kernel: kvm-guest: PV spinlocks enabled Aug 19 08:08:29.904685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 19 08:08:29.904694 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:29.904705 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 08:08:29.904713 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 08:08:29.904721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 08:08:29.904729 kernel: Fallback order for Node 0: 0 Aug 19 08:08:29.904736 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Aug 19 08:08:29.904744 kernel: Policy zone: DMA32 Aug 19 08:08:29.904752 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 08:08:29.904760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 19 08:08:29.904770 kernel: ftrace: allocating 40101 entries in 157 pages Aug 19 08:08:29.904778 kernel: ftrace: allocated 157 pages with 5 groups Aug 19 08:08:29.904786 kernel: Dynamic Preempt: voluntary Aug 19 08:08:29.904793 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 08:08:29.904802 kernel: rcu: RCU event tracing is enabled. Aug 19 08:08:29.904810 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 19 08:08:29.904818 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 08:08:29.904826 kernel: Rude variant of Tasks RCU enabled. Aug 19 08:08:29.904834 kernel: Tracing variant of Tasks RCU enabled. Aug 19 08:08:29.904842 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 19 08:08:29.904861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 19 08:08:29.904869 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Aug 19 08:08:29.904877 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:08:29.904887 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:08:29.904895 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 19 08:08:29.904908 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 08:08:29.904916 kernel: Console: colour dummy device 80x25 Aug 19 08:08:29.904924 kernel: printk: legacy console [ttyS0] enabled Aug 19 08:08:29.904932 kernel: ACPI: Core revision 20240827 Aug 19 08:08:29.904942 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 19 08:08:29.904950 kernel: APIC: Switch to symmetric I/O mode setup Aug 19 08:08:29.904958 kernel: x2apic enabled Aug 19 08:08:29.904966 kernel: APIC: Switched APIC routing to: physical x2apic Aug 19 08:08:29.904974 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 19 08:08:29.904982 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 19 08:08:29.904990 kernel: kvm-guest: setup PV IPIs Aug 19 08:08:29.904998 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 19 08:08:29.905006 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:08:29.905016 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 19 08:08:29.905024 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 19 08:08:29.905032 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 19 08:08:29.905040 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 19 08:08:29.905051 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 19 08:08:29.905059 kernel: Spectre V2 : Mitigation: Retpolines Aug 19 08:08:29.905067 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 19 08:08:29.905075 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 19 08:08:29.905085 kernel: RETBleed: Mitigation: untrained return thunk Aug 19 08:08:29.905093 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 19 08:08:29.905101 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 19 08:08:29.905109 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 19 08:08:29.905117 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 19 08:08:29.905125 kernel: x86/bugs: return thunk changed Aug 19 08:08:29.905133 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 19 08:08:29.905145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 19 08:08:29.905153 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 19 08:08:29.905164 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 19 08:08:29.905172 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 19 08:08:29.905180 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Aug 19 08:08:29.905188 kernel: Freeing SMP alternatives memory: 32K Aug 19 08:08:29.905195 kernel: pid_max: default: 32768 minimum: 301 Aug 19 08:08:29.905203 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 08:08:29.905211 kernel: landlock: Up and running. Aug 19 08:08:29.905219 kernel: SELinux: Initializing. Aug 19 08:08:29.905227 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:08:29.905237 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:08:29.905246 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 19 08:08:29.905253 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 19 08:08:29.905276 kernel: ... version: 0 Aug 19 08:08:29.905285 kernel: ... bit width: 48 Aug 19 08:08:29.905295 kernel: ... generic registers: 6 Aug 19 08:08:29.905303 kernel: ... value mask: 0000ffffffffffff Aug 19 08:08:29.905311 kernel: ... max period: 00007fffffffffff Aug 19 08:08:29.905319 kernel: ... fixed-purpose events: 0 Aug 19 08:08:29.905330 kernel: ... event mask: 000000000000003f Aug 19 08:08:29.905338 kernel: signal: max sigframe size: 1776 Aug 19 08:08:29.905346 kernel: rcu: Hierarchical SRCU implementation. Aug 19 08:08:29.905354 kernel: rcu: Max phase no-delay instances is 400. Aug 19 08:08:29.905362 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 08:08:29.905369 kernel: smp: Bringing up secondary CPUs ... Aug 19 08:08:29.905377 kernel: smpboot: x86: Booting SMP configuration: Aug 19 08:08:29.905385 kernel: .... node #0, CPUs: #1 #2 #3 Aug 19 08:08:29.905393 kernel: smp: Brought up 1 node, 4 CPUs Aug 19 08:08:29.905403 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 19 08:08:29.905412 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 137064K reserved, 0K cma-reserved) Aug 19 08:08:29.905420 kernel: devtmpfs: initialized Aug 19 08:08:29.905427 kernel: x86/mm: Memory block size: 128MB Aug 19 08:08:29.905435 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Aug 19 08:08:29.905443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Aug 19 08:08:29.905452 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 08:08:29.905460 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 19 08:08:29.905472 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 08:08:29.905498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 08:08:29.905517 kernel: audit: initializing netlink subsys (disabled) Aug 19 08:08:29.905528 kernel: audit: type=2000 audit(1755590907.197:1): state=initialized audit_enabled=0 res=1 Aug 19 08:08:29.905536 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 08:08:29.905544 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 19 08:08:29.905552 kernel: cpuidle: using governor menu Aug 19 08:08:29.905560 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 08:08:29.905568 kernel: dca service started, version 1.12.1 Aug 19 08:08:29.905578 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Aug 19 08:08:29.905586 kernel: PCI: Using configuration type 1 for base access Aug 19 08:08:29.905593 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Aug 19 08:08:29.905601 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 08:08:29.905609 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 08:08:29.905617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 08:08:29.905625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 08:08:29.905633 kernel: ACPI: Added _OSI(Module Device) Aug 19 08:08:29.905641 kernel: ACPI: Added _OSI(Processor Device) Aug 19 08:08:29.905651 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 08:08:29.905659 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 08:08:29.905667 kernel: ACPI: Interpreter enabled Aug 19 08:08:29.905675 kernel: ACPI: PM: (supports S0 S5) Aug 19 08:08:29.905682 kernel: ACPI: Using IOAPIC for interrupt routing Aug 19 08:08:29.905690 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 19 08:08:29.905698 kernel: PCI: Using E820 reservations for host bridge windows Aug 19 08:08:29.905706 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 19 08:08:29.905714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 19 08:08:29.905978 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 08:08:29.906113 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 19 08:08:29.906235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 19 08:08:29.906246 kernel: PCI host bridge to bus 0000:00 Aug 19 08:08:29.906433 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 19 08:08:29.907303 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 19 08:08:29.907449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 19 08:08:29.907569 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Aug 19 08:08:29.907680 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Aug 19 08:08:29.907791 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:08:29.907958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 19 08:08:29.908137 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 19 08:08:29.908321 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 19 08:08:29.908454 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Aug 19 08:08:29.908575 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Aug 19 08:08:29.908696 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Aug 19 08:08:29.908817 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 19 08:08:29.908969 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Aug 19 08:08:29.909101 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Aug 19 08:08:29.909222 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Aug 19 08:08:29.909378 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Aug 19 08:08:29.909521 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 19 08:08:29.909648 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Aug 19 08:08:29.909773 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Aug 19 08:08:29.909908 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Aug 19 08:08:29.910049 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 19 08:08:29.910179 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Aug 19 08:08:29.910397 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Aug 19 08:08:29.910524 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Aug 19 08:08:29.910645 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Aug 19 08:08:29.910777 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 19 08:08:29.910913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 19 08:08:29.911049 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 19 08:08:29.911180 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Aug 19 08:08:29.911323 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Aug 19 08:08:29.911564 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 19 08:08:29.911690 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Aug 19 08:08:29.911702 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 19 08:08:29.911711 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 19 08:08:29.911719 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 19 08:08:29.911728 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 19 08:08:29.911741 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 19 08:08:29.911749 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 19 08:08:29.911757 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 19 08:08:29.911765 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 19 08:08:29.911774 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 19 08:08:29.911782 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 19 08:08:29.911791 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 19 08:08:29.911799 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 19 08:08:29.911807 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 19 08:08:29.911818 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 19 08:08:29.911827 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 19 08:08:29.911835 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 19 08:08:29.911843 kernel: iommu: Default domain type: Translated Aug 19 08:08:29.911861 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 19 08:08:29.911869 kernel: efivars: Registered efivars operations Aug 19 08:08:29.911878 kernel: PCI: Using ACPI for IRQ routing Aug 19 08:08:29.911887 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 19 08:08:29.911895 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Aug 19 08:08:29.911906 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Aug 19 08:08:29.911915 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Aug 19 08:08:29.911923 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Aug 19 08:08:29.911931 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Aug 19 08:08:29.912058 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 19 08:08:29.912207 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 19 08:08:29.912368 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Aug 19 08:08:29.912380 kernel: vgaarb: loaded Aug 19 08:08:29.912393 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 19 08:08:29.912401 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 19 08:08:29.912409 kernel: clocksource: Switched to clocksource kvm-clock Aug 19 08:08:29.912417 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 08:08:29.912426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 08:08:29.912435 kernel: pnp: PnP ACPI init Aug 19 08:08:29.912586 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Aug 19 08:08:29.912599 kernel: pnp: PnP ACPI: found 6 devices Aug 19 08:08:29.912607 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 19 08:08:29.912620 kernel: NET: Registered PF_INET protocol family Aug 19 08:08:29.912628 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 08:08:29.912637 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 08:08:29.912646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 08:08:29.912655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 08:08:29.912663 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 08:08:29.912672 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 08:08:29.912680 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:08:29.912692 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:08:29.912700 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 08:08:29.912709 kernel: NET: Registered PF_XDP protocol family Aug 19 08:08:29.912834 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Aug 19 08:08:29.912972 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Aug 19 08:08:29.913087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 19 08:08:29.913198 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 19 08:08:29.913325 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 19 08:08:29.913470 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Aug 19 08:08:29.914817 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Aug 19 08:08:29.914957 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:08:29.914969 kernel: PCI: CLS 0 bytes, default 64 Aug 19 08:08:29.914979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:08:29.914987 kernel: Initialise system trusted keyrings Aug 19 08:08:29.914996 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 08:08:29.915005 kernel: Key type asymmetric registered Aug 19 08:08:29.915013 kernel: Asymmetric key parser 'x509' registered Aug 19 08:08:29.915026 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 19 08:08:29.915052 kernel: io scheduler mq-deadline registered Aug 19 08:08:29.915063 kernel: io scheduler kyber registered Aug 19 08:08:29.915072 kernel: io scheduler bfq registered Aug 19 08:08:29.915082 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 19 08:08:29.915092 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 19 08:08:29.915102 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 19 08:08:29.915111 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 19 08:08:29.915120 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 08:08:29.915131 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 19 08:08:29.915139 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 19 08:08:29.915147 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 19 08:08:29.915156 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 19 08:08:29.915358 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 19 08:08:29.915373 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 19 08:08:29.915491 kernel: rtc_cmos 00:04: registered as rtc0 Aug 19 08:08:29.915606 kernel: rtc_cmos 00:04: setting system clock to 2025-08-19T08:08:29 UTC (1755590909) Aug 19 08:08:29.915725 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Aug 19 08:08:29.915736 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 19 08:08:29.915744 kernel: efifb: probing for efifb Aug 19 08:08:29.915753 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Aug 19 08:08:29.915762 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Aug 19 08:08:29.915770 kernel: efifb: scrolling: redraw Aug 19 08:08:29.915778 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 19 08:08:29.915787 kernel: Console: switching to colour frame buffer device 160x50 Aug 19 08:08:29.915795 kernel: fb0: EFI VGA frame buffer device Aug 19 08:08:29.915806 kernel: pstore: Using crash dump compression: deflate Aug 19 08:08:29.915815 kernel: pstore: Registered efi_pstore as persistent store backend Aug 19 08:08:29.915826 kernel: NET: Registered PF_INET6 protocol family Aug 19 08:08:29.915834 kernel: Segment Routing with IPv6 Aug 19 08:08:29.915843 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 08:08:29.915861 kernel: NET: Registered PF_PACKET protocol family Aug 19 08:08:29.915871 kernel: Key type dns_resolver registered Aug 19 08:08:29.915880 kernel: IPI shorthand broadcast: enabled Aug 19 08:08:29.915888 kernel: sched_clock: Marking stable (3933002491, 134007900)->(4083432869, -16422478) Aug 19 08:08:29.915897 kernel: registered taskstats version 1 Aug 19 08:08:29.915905 kernel: Loading compiled-in X.509 certificates Aug 19 08:08:29.915914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf' Aug 19 08:08:29.915922 kernel: Demotion targets for Node 0: null Aug 19 08:08:29.915931 kernel: Key type .fscrypt registered Aug 19 08:08:29.915940 kernel: Key type fscrypt-provisioning registered Aug 19 08:08:29.915950 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 08:08:29.915959 kernel: ima: Allocated hash algorithm: sha1 Aug 19 08:08:29.915967 kernel: ima: No architecture policies found Aug 19 08:08:29.915975 kernel: clk: Disabling unused clocks Aug 19 08:08:29.915984 kernel: Warning: unable to open an initial console. 
Aug 19 08:08:29.915993 kernel: Freeing unused kernel image (initmem) memory: 54040K Aug 19 08:08:29.916001 kernel: Write protecting the kernel read-only data: 24576k Aug 19 08:08:29.916010 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 19 08:08:29.916021 kernel: Run /init as init process Aug 19 08:08:29.916029 kernel: with arguments: Aug 19 08:08:29.916038 kernel: /init Aug 19 08:08:29.916046 kernel: with environment: Aug 19 08:08:29.916054 kernel: HOME=/ Aug 19 08:08:29.916062 kernel: TERM=linux Aug 19 08:08:29.916070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 08:08:29.916080 systemd[1]: Successfully made /usr/ read-only. Aug 19 08:08:29.916095 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:08:29.916104 systemd[1]: Detected virtualization kvm. Aug 19 08:08:29.916113 systemd[1]: Detected architecture x86-64. Aug 19 08:08:29.916122 systemd[1]: Running in initrd. Aug 19 08:08:29.916130 systemd[1]: No hostname configured, using default hostname. Aug 19 08:08:29.916140 systemd[1]: Hostname set to . Aug 19 08:08:29.916148 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:08:29.916157 systemd[1]: Queued start job for default target initrd.target. Aug 19 08:08:29.916168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:29.916178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:29.916187 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 08:08:29.916196 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:08:29.916205 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 08:08:29.916215 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 19 08:08:29.916225 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 08:08:29.916236 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 08:08:29.916245 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:29.916254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:29.916278 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:08:29.916287 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:08:29.916296 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:08:29.916305 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:08:29.916314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:08:29.916326 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:08:29.916335 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 08:08:29.916344 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 08:08:29.916353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Aug 19 08:08:29.916362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:29.916370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:29.916380 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:08:29.916388 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 08:08:29.916397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:08:29.916408 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 08:08:29.916420 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 08:08:29.916429 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 08:08:29.916438 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:08:29.916447 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:08:29.916456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:29.916465 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 08:08:29.916476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:29.916485 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 08:08:29.916494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:08:29.916526 systemd-journald[219]: Collecting audit messages is disabled. Aug 19 08:08:29.916551 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:08:29.916561 systemd-journald[219]: Journal started Aug 19 08:08:29.916580 systemd-journald[219]: Runtime Journal (/run/log/journal/804d849edef04514bf8307b430a64a41) is 6M, max 48.2M, 42.2M free. Aug 19 08:08:29.905823 systemd-modules-load[222]: Inserted module 'overlay' Aug 19 08:08:29.919293 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:08:29.919813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:29.924766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 08:08:29.926105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:08:29.932388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:08:29.935899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 08:08:29.937291 kernel: Bridge firewalling registered Aug 19 08:08:29.937289 systemd-modules-load[222]: Inserted module 'br_netfilter' Aug 19 08:08:29.939050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:29.941200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:08:29.950062 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 08:08:29.952768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:08:29.953376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:29.956624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Aug 19 08:08:29.959240 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:08:29.964140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:08:29.972172 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 08:08:29.992813 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:08:30.019156 systemd-resolved[259]: Positive Trust Anchors: Aug 19 08:08:30.019201 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:08:30.019233 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:08:30.022853 systemd-resolved[259]: Defaulting to hostname 'linux'. Aug 19 08:08:30.028468 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:08:30.031286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:30.119303 kernel: SCSI subsystem initialized Aug 19 08:08:30.133300 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:08:30.147302 kernel: iscsi: registered transport (tcp) Aug 19 08:08:30.169461 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:08:30.169500 kernel: QLogic iSCSI HBA Driver Aug 19 08:08:30.192496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:08:30.210082 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:30.210905 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:08:30.270425 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:08:30.272580 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 08:08:30.331293 kernel: raid6: avx2x4 gen() 29519 MB/s Aug 19 08:08:30.348289 kernel: raid6: avx2x2 gen() 29428 MB/s Aug 19 08:08:30.365330 kernel: raid6: avx2x1 gen() 25617 MB/s Aug 19 08:08:30.365351 kernel: raid6: using algorithm avx2x4 gen() 29519 MB/s Aug 19 08:08:30.383319 kernel: raid6: .... xor() 7970 MB/s, rmw enabled Aug 19 08:08:30.383343 kernel: raid6: using avx2x2 recovery algorithm Aug 19 08:08:30.403290 kernel: xor: automatically using best checksumming function avx Aug 19 08:08:30.571322 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:08:30.580119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:08:30.582589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:08:30.628860 systemd-udevd[473]: Using default interface naming scheme 'v255'. 
Aug 19 08:08:30.635738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:30.638049 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:08:30.662962 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Aug 19 08:08:30.694248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:08:30.696064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:08:30.796511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:30.798710 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 08:08:30.842287 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 19 08:08:30.842782 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:08:30.845702 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 08:08:30.854630 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 08:08:30.854655 kernel: GPT:9289727 != 19775487 Aug 19 08:08:30.854666 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 19 08:08:30.854677 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 08:08:30.854687 kernel: GPT:9289727 != 19775487 Aug 19 08:08:30.854696 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 08:08:30.854712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:30.858288 kernel: AES CTR mode by8 optimization enabled Aug 19 08:08:30.866449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:30.866578 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:30.871613 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:30.875633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:30.890919 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:08:30.897310 kernel: libata version 3.00 loaded. Aug 19 08:08:30.910588 kernel: ahci 0000:00:1f.2: version 3.0 Aug 19 08:08:30.910792 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 19 08:08:30.914107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 08:08:30.918966 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 19 08:08:30.919141 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 19 08:08:30.919325 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 19 08:08:30.921291 kernel: scsi host0: ahci Aug 19 08:08:30.921523 kernel: scsi host1: ahci Aug 19 08:08:30.921676 kernel: scsi host2: ahci Aug 19 08:08:30.922296 kernel: scsi host3: ahci Aug 19 08:08:30.923519 kernel: scsi host4: ahci Aug 19 08:08:30.923733 kernel: scsi host5: ahci Aug 19 08:08:30.923931 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Aug 19 08:08:30.925480 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Aug 19 08:08:30.925500 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Aug 19 08:08:30.927458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Aug 19 08:08:30.932052 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Aug 19 08:08:30.932068 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Aug 19 08:08:30.932084 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Aug 19 08:08:30.950496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 19 08:08:30.950949 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 19 08:08:30.959794 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:08:30.960912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 08:08:30.962571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:30.962625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:30.966669 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:30.981911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:30.985059 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:08:30.993886 disk-uuid[637]: Primary Header is updated. Aug 19 08:08:30.993886 disk-uuid[637]: Secondary Entries is updated. Aug 19 08:08:30.993886 disk-uuid[637]: Secondary Header is updated. Aug 19 08:08:31.000314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:31.005009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:31.008465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:31.240678 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:31.240736 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 19 08:08:31.241278 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:31.242302 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:31.242389 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 19 08:08:31.243402 kernel: ata3.00: applying bridge limits Aug 19 08:08:31.244297 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:31.244321 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 19 08:08:31.245298 kernel: ata3.00: configured for UDMA/100 Aug 19 08:08:31.246305 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 19 08:08:31.303304 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 19 08:08:31.303673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 19 08:08:31.333301 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 19 08:08:31.761402 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 08:08:31.763143 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:08:31.764776 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:31.766058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:08:31.769008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:08:31.800909 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Aug 19 08:08:32.008302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:08:32.009023 disk-uuid[641]: The operation has completed successfully. Aug 19 08:08:32.040364 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:08:32.040510 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:08:32.074972 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:08:32.103590 sh[673]: Success Aug 19 08:08:32.121323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 08:08:32.121408 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:08:32.121422 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:08:32.132301 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 19 08:08:32.164606 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:08:32.169057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:08:32.195537 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:08:32.201015 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (685) Aug 19 08:08:32.201047 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:08:32.201058 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:32.201872 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:08:32.206921 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:08:32.208367 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:08:32.209766 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:08:32.210626 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 08:08:32.212439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:08:32.239336 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Aug 19 08:08:32.241620 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:32.241652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:32.241669 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:32.250300 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:32.250867 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:08:32.254179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 08:08:32.411506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:08:32.414222 ignition[757]: Ignition 2.21.0 Aug 19 08:08:32.414235 ignition[757]: Stage: fetch-offline Aug 19 08:08:32.414246 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 19 08:08:32.414285 ignition[757]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:32.414296 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:32.416409 ignition[757]: parsed url from cmdline: "" Aug 19 08:08:32.416414 ignition[757]: no config URL provided Aug 19 08:08:32.416420 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:08:32.416430 ignition[757]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:08:32.417406 ignition[757]: op(1): [started] loading QEMU firmware config module Aug 19 08:08:32.417415 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 08:08:32.427187 ignition[757]: op(1): [finished] loading QEMU firmware config module Aug 19 08:08:32.452396 systemd-networkd[862]: lo: Link UP Aug 19 08:08:32.452407 systemd-networkd[862]: lo: Gained carrier Aug 19 08:08:32.454022 systemd-networkd[862]: Enumeration completed Aug 19 08:08:32.454123 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:08:32.454441 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:32.454446 systemd-networkd[862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:08:32.454961 systemd-networkd[862]: eth0: Link UP Aug 19 08:08:32.455168 systemd-networkd[862]: eth0: Gained carrier Aug 19 08:08:32.455177 systemd-networkd[862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:32.455920 systemd[1]: Reached target network.target - Network. Aug 19 08:08:32.477530 ignition[757]: parsing config with SHA512: 766efa1c69e204ed7e08f6902839984fa863d916bf80779c08d92ef4fe9cbbcba0a59a027fa40353fe6ce02c7b4fabbfeba970cb65009df24961ed0df89f2f38 Aug 19 08:08:32.478940 systemd-networkd[862]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:08:32.485796 unknown[757]: fetched base config from "system" Aug 19 08:08:32.485809 unknown[757]: fetched user config from "qemu" Aug 19 08:08:32.486189 ignition[757]: fetch-offline: fetch-offline passed Aug 19 08:08:32.486252 ignition[757]: Ignition finished successfully Aug 19 08:08:32.489406 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:08:32.491199 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 08:08:32.493700 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:08:32.540807 ignition[869]: Ignition 2.21.0 Aug 19 08:08:32.540822 ignition[869]: Stage: kargs Aug 19 08:08:32.543976 ignition[869]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:32.544006 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:32.546459 ignition[869]: kargs: kargs passed Aug 19 08:08:32.546523 ignition[869]: Ignition finished successfully Aug 19 08:08:32.552303 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 08:08:32.555447 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 19 08:08:32.607273 ignition[877]: Ignition 2.21.0 Aug 19 08:08:32.607289 ignition[877]: Stage: disks Aug 19 08:08:32.607433 ignition[877]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:32.607443 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:32.609517 ignition[877]: disks: disks passed Aug 19 08:08:32.609595 ignition[877]: Ignition finished successfully Aug 19 08:08:32.615142 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:08:32.615810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:08:32.617471 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:08:32.617789 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:08:32.618105 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:08:32.618579 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:08:32.626662 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:08:32.653042 systemd-fsck[887]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 08:08:32.660696 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 08:08:32.662004 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:08:32.937315 kernel: EXT4-fs (vda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:08:32.938578 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:08:32.940483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:08:32.942545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:08:32.944096 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:08:32.946484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 08:08:32.946552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:08:32.948226 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:08:32.955052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:08:32.958048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 08:08:32.962829 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (895) Aug 19 08:08:32.962857 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:32.962870 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:32.962880 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:32.966780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:08:33.000054 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:08:33.004664 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:08:33.008849 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:08:33.013831 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:08:33.114241 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:08:33.115644 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Aug 19 08:08:33.117967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:08:33.146360 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:33.200361 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:08:33.210815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 08:08:33.230771 ignition[1009]: INFO : Ignition 2.21.0 Aug 19 08:08:33.230771 ignition[1009]: INFO : Stage: mount Aug 19 08:08:33.232598 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:33.232598 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:33.232598 ignition[1009]: INFO : mount: mount passed Aug 19 08:08:33.232598 ignition[1009]: INFO : Ignition finished successfully Aug 19 08:08:33.234805 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:08:33.237509 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:08:33.260707 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:08:33.284081 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1021) Aug 19 08:08:33.284126 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:08:33.284138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:08:33.285650 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:08:33.289470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:08:33.323150 ignition[1038]: INFO : Ignition 2.21.0 Aug 19 08:08:33.323150 ignition[1038]: INFO : Stage: files Aug 19 08:08:33.324874 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:33.324874 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:33.327771 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:08:33.329675 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:08:33.329675 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:08:33.332355 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:08:33.332355 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:08:33.335072 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:08:33.335072 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 19 08:08:33.335072 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 19 08:08:33.332806 unknown[1038]: wrote ssh authorized keys file for user: core Aug 19 08:08:33.371953 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:08:33.595674 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 19 08:08:33.595674 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:08:33.599383 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 19 08:08:33.818640 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 08:08:33.949536 systemd-networkd[862]: eth0: Gained IPv6LL Aug 19 08:08:34.064237 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:08:34.066412 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:34.080785 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 19 08:08:34.434439 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 08:08:35.200738 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 19 08:08:35.200738 ignition[1038]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 08:08:35.204846 ignition[1038]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 
08:08:35.207007 ignition[1038]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 19 08:08:35.207007 ignition[1038]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 08:08:35.228402 ignition[1038]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:08:35.233066 ignition[1038]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:08:35.234988 ignition[1038]: INFO : files: files passed Aug 19 08:08:35.234988 ignition[1038]: INFO : Ignition finished successfully Aug 19 08:08:35.240895 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:08:35.243767 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:08:35.247252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:08:35.274487 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:08:35.274703 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:08:35.278432 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 08:08:35.282466 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:35.282466 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:35.285844 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:08:35.289537 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:08:35.290299 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:08:35.294789 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:08:35.364108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:08:35.364240 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:08:35.365194 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 19 08:08:35.367806 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:08:35.368151 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:08:35.369076 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:08:35.388475 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:08:35.392381 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:08:35.425094 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:35.425622 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:35.425961 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:08:35.426291 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:08:35.426411 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:08:35.427078 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:08:35.427559 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:08:35.427892 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:08:35.428207 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:08:35.428696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:08:35.429005 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:08:35.429440 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:08:35.429805 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:08:35.430128 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:08:35.430608 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:08:35.430922 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:08:35.431214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:08:35.431331 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:08:35.432036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:35.432566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:35.432820 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:08:35.432930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:35.460578 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:08:35.460681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:08:35.462868 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:08:35.462971 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:08:35.465623 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:08:35.465854 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:08:35.470323 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:35.470999 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:08:35.471326 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:08:35.471971 systemd[1]: iscsid.socket: Deactivated successfully. 
Aug 19 08:08:35.472056 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:08:35.476708 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:08:35.476789 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:08:35.478407 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:08:35.478511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:08:35.480048 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:08:35.480144 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:08:35.482830 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:08:35.484510 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:08:35.488252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:08:35.488457 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:35.488816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:08:35.488917 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:08:35.497455 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:08:35.497578 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:08:35.522475 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:08:35.523758 ignition[1094]: INFO : Ignition 2.21.0 Aug 19 08:08:35.523758 ignition[1094]: INFO : Stage: umount Aug 19 08:08:35.523758 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:08:35.523758 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:08:35.529369 ignition[1094]: INFO : umount: umount passed Aug 19 08:08:35.529369 ignition[1094]: INFO : Ignition finished successfully Aug 19 08:08:35.527257 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:08:35.527413 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:08:35.529579 systemd[1]: Stopped target network.target - Network. Aug 19 08:08:35.529855 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:08:35.529912 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:08:35.530214 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:08:35.530259 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:08:35.530936 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:08:35.530999 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:08:35.531284 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:08:35.531331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:08:35.531878 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:08:35.532193 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:08:35.550799 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:08:35.550969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:08:35.555120 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:08:35.555442 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Aug 19 08:08:35.555492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:08:35.559314 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:08:35.561363 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:08:35.561497 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 08:08:35.565752 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:08:35.565953 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:08:35.566320 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:08:35.566357 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:08:35.572230 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:08:35.572697 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:08:35.572753 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:08:35.573057 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:08:35.573099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:35.578053 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:08:35.578099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:35.578614 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:08:35.579749 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:08:35.600280 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:08:35.605432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:35.608328 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:08:35.608445 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:08:35.609439 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:08:35.609529 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:35.611616 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:08:35.611663 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:35.611917 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:08:35.611976 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:08:35.612702 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:08:35.612759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:08:35.613488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:08:35.613542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:08:35.615057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:08:35.622843 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:08:35.622908 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:35.626654 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:08:35.626715 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 19 08:08:35.629767 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 19 08:08:35.629813 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:08:35.633103 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:08:35.633152 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:35.633773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:08:35.633816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:35.654661 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:08:35.654790 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:08:35.706753 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:08:35.706898 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:08:35.707869 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:08:35.709791 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:08:35.709848 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:08:35.713281 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:08:35.748692 systemd[1]: Switching root. Aug 19 08:08:35.798190 systemd-journald[219]: Journal stopped Aug 19 08:08:37.615973 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Aug 19 08:08:37.616051 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:08:37.616067 kernel: SELinux: policy capability open_perms=1 Aug 19 08:08:37.616084 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:08:37.616104 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:08:37.616116 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:08:37.616127 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:08:37.616145 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:08:37.616158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:08:37.616170 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:08:37.616182 kernel: audit: type=1403 audit(1755590916.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:08:37.616194 systemd[1]: Successfully loaded SELinux policy in 59.991ms. Aug 19 08:08:37.616223 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.006ms. Aug 19 08:08:37.616237 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:08:37.616250 systemd[1]: Detected virtualization kvm. Aug 19 08:08:37.616277 systemd[1]: Detected architecture x86-64. Aug 19 08:08:37.616290 systemd[1]: Detected first boot. Aug 19 08:08:37.616304 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:08:37.616316 zram_generator::config[1140]: No configuration found. 
Aug 19 08:08:37.616336 kernel: Guest personality initialized and is inactive Aug 19 08:08:37.616347 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:08:37.616359 kernel: Initialized host personality Aug 19 08:08:37.616370 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:08:37.616382 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:08:37.616395 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:08:37.616415 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:08:37.616430 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:08:37.616442 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:08:37.616455 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:08:37.616467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:08:37.616479 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:08:37.616491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:08:37.616504 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:08:37.616516 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:08:37.616530 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:08:37.616543 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:08:37.616555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:08:37.616567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:08:37.616579 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:08:37.616591 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:08:37.616604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:08:37.616619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:08:37.616638 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:08:37.616650 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:08:37.616663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:08:37.616675 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:08:37.616694 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:08:37.616706 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:08:37.616718 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:08:37.616730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:08:37.616742 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:08:37.616757 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:08:37.616769 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:08:37.616780 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Aug 19 08:08:37.616792 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:08:37.616804 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:08:37.616816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:08:37.616828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:08:37.616840 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:08:37.616852 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:08:37.616866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:08:37.616878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:08:37.616890 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:08:37.616902 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:37.616915 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:08:37.616927 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:08:37.616940 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:08:37.616958 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:08:37.616973 systemd[1]: Reached target machines.target - Containers. Aug 19 08:08:37.616984 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 08:08:37.616997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:37.617010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:08:37.617026 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:08:37.617042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:37.617057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:08:37.617072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:37.617087 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:08:37.617102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:37.617115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:08:37.617126 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:08:37.617138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:08:37.617150 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:08:37.617162 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:08:37.617175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:37.617187 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:08:37.617201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Aug 19 08:08:37.617213 kernel: loop: module loaded Aug 19 08:08:37.617224 kernel: fuse: init (API version 7.41) Aug 19 08:08:37.617245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:08:37.617257 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:08:37.646393 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:08:37.646410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:08:37.646429 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:08:37.646442 systemd[1]: Stopped verity-setup.service. Aug 19 08:08:37.646458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:37.648497 systemd-journald[1211]: Collecting audit messages is disabled. Aug 19 08:08:37.648557 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:08:37.648586 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:08:37.648605 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:08:37.648618 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:08:37.648639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:08:37.648651 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:08:37.648663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:08:37.648680 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:08:37.648692 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:08:37.648704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:37.648716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:37.648737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:37.648753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:37.648777 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:08:37.648790 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:08:37.648802 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:08:37.648818 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:37.648837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:08:37.648860 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:08:37.648874 systemd-journald[1211]: Journal started Aug 19 08:08:37.648896 systemd-journald[1211]: Runtime Journal (/run/log/journal/804d849edef04514bf8307b430a64a41) is 6M, max 48.2M, 42.2M free. Aug 19 08:08:37.393947 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:08:37.413317 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 08:08:37.413812 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:08:37.651871 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:08:37.653180 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Aug 19 08:08:37.666710 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:08:37.672341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:08:37.675849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:08:37.676992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:08:37.677123 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:08:37.680446 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:08:37.736234 kernel: ACPI: bus type drm_connector registered Aug 19 08:08:37.745468 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:08:37.746910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:37.748502 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:08:37.750524 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 08:08:37.751813 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:08:37.753699 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:08:37.755025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:08:37.758381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:08:37.761933 systemd-journald[1211]: Time spent on flushing to /var/log/journal/804d849edef04514bf8307b430a64a41 is 21.921ms for 1035 entries. Aug 19 08:08:37.761933 systemd-journald[1211]: System Journal (/var/log/journal/804d849edef04514bf8307b430a64a41) is 8M, max 195.6M, 187.6M free. Aug 19 08:08:37.796065 systemd-journald[1211]: Received client request to flush runtime journal. Aug 19 08:08:37.796117 kernel: loop0: detected capacity change from 0 to 111000 Aug 19 08:08:37.761965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:08:37.765243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:08:37.770425 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:08:37.771240 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:08:37.771470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:08:37.776059 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:08:37.780758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:08:37.782703 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:08:37.785365 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:08:37.787162 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:08:37.795546 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:08:37.801387 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Aug 19 08:08:37.803311 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:08:37.804949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:08:37.815824 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 19 08:08:37.815843 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 19 08:08:37.823081 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:08:37.827344 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:08:37.828301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:08:37.838582 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:08:37.857410 kernel: loop1: detected capacity change from 0 to 224512 Aug 19 08:08:37.865762 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:08:37.869239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:08:37.893346 kernel: loop2: detected capacity change from 0 to 128016 Aug 19 08:08:37.899929 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Aug 19 08:08:37.899952 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Aug 19 08:08:37.904146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:08:37.996315 kernel: loop3: detected capacity change from 0 to 111000 Aug 19 08:08:38.007295 kernel: loop4: detected capacity change from 0 to 224512 Aug 19 08:08:38.015301 kernel: loop5: detected capacity change from 0 to 128016 Aug 19 08:08:38.027029 (sd-merge)[1284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 19 08:08:38.027635 (sd-merge)[1284]: Merged extensions into '/usr'. Aug 19 08:08:38.034474 systemd[1]: Reload requested from client PID 1256 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:08:38.034492 systemd[1]: Reloading... Aug 19 08:08:38.181330 zram_generator::config[1307]: No configuration found. Aug 19 08:08:38.369347 ldconfig[1251]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:08:38.420933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:08:38.421040 systemd[1]: Reloading finished in 386 ms. Aug 19 08:08:38.452872 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:08:38.454820 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:08:38.471699 systemd[1]: Starting ensure-sysext.service... Aug 19 08:08:38.473579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:08:38.488067 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:08:38.488081 systemd[1]: Reloading... Aug 19 08:08:38.493787 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:08:38.494125 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:08:38.494586 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:08:38.494935 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Aug 19 08:08:38.495898 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:08:38.496242 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Aug 19 08:08:38.496406 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Aug 19 08:08:38.501050 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:08:38.501132 systemd-tmpfiles[1348]: Skipping /boot Aug 19 08:08:38.512021 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:08:38.512106 systemd-tmpfiles[1348]: Skipping /boot Aug 19 08:08:38.549341 zram_generator::config[1374]: No configuration found. Aug 19 08:08:38.831088 systemd[1]: Reloading finished in 342 ms. Aug 19 08:08:38.851516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:08:38.853286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:08:38.882656 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:08:38.885314 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:08:38.887819 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:08:38.900782 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:08:38.904511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:08:38.907738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:08:38.912616 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:38.912801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:38.919342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:38.922476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:38.926632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:38.927887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:38.928016 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:38.930560 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:08:38.931751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:38.933530 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:08:38.935544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:38.935798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:38.942478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:38.942723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:38.952082 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 19 08:08:38.952423 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:38.955436 systemd-udevd[1419]: Using default interface naming scheme 'v255'. Aug 19 08:08:38.958248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:08:38.965483 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:38.965747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:08:38.966688 augenrules[1448]: No rules Aug 19 08:08:38.968145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:08:38.972362 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:08:38.980648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:08:38.983399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:08:38.984539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:08:38.985389 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:08:38.986714 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:08:38.986986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:08:38.989210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:08:38.990936 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:08:38.991968 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:08:38.997568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:08:38.999863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:08:39.001806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:08:39.002044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:08:39.003707 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:08:39.004317 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:08:39.005867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:08:39.006141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:08:39.010998 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:08:39.011221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:08:39.012976 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:08:39.023619 systemd[1]: Finished ensure-sysext.service. Aug 19 08:08:39.034857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:08:39.035908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:08:39.035972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 19 08:08:39.040433 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 19 08:08:39.041531 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:08:39.054366 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:08:39.113205 systemd-resolved[1417]: Positive Trust Anchors: Aug 19 08:08:39.113224 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:08:39.113254 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:08:39.121379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:08:39.124550 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:08:39.125942 systemd-resolved[1417]: Defaulting to hostname 'linux'. Aug 19 08:08:39.130085 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:08:39.131340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:08:39.148090 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 08:08:39.152302 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:08:39.155326 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 19 08:08:39.167286 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:08:39.180785 systemd-networkd[1498]: lo: Link UP Aug 19 08:08:39.180799 systemd-networkd[1498]: lo: Gained carrier Aug 19 08:08:39.182519 systemd-networkd[1498]: Enumeration completed Aug 19 08:08:39.182616 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:08:39.184105 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:39.184119 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:08:39.184519 systemd[1]: Reached target network.target - Network. Aug 19 08:08:39.186886 systemd-networkd[1498]: eth0: Link UP Aug 19 08:08:39.187076 systemd-networkd[1498]: eth0: Gained carrier Aug 19 08:08:39.187100 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:08:39.187607 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 08:08:39.192520 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Aug 19 08:08:39.200433 systemd-networkd[1498]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:08:39.213540 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 19 08:08:39.219313 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 19 08:08:39.219491 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 19 08:08:39.222970 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:08:39.241058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 08:08:40.627434 systemd-resolved[1417]: Clock change detected. Flushing caches. Aug 19 08:08:40.627478 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 08:08:40.627530 systemd-timesyncd[1499]: Initial clock synchronization to Tue 2025-08-19 08:08:40.627369 UTC. Aug 19 08:08:40.627597 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:08:40.628754 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:08:40.629979 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:08:40.631228 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:08:40.632338 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:08:40.633589 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:08:40.633623 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:08:40.634529 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:08:40.635703 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:08:40.636876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:08:40.638099 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:08:40.722173 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:08:40.725537 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:08:40.729538 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:08:40.731009 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 19 08:08:40.732274 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:08:40.739121 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:08:40.740495 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:08:40.742291 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:08:40.749745 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:08:40.750851 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:08:40.751904 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:08:40.752048 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:08:40.769869 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:08:40.773292 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Aug 19 08:08:40.776346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:08:40.780717 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:08:40.785424 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:08:40.786765 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:08:40.788155 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:08:40.791034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:08:40.800540 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:08:40.802893 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:08:40.811016 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:08:40.819212 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:08:40.820106 jq[1538]: false Aug 19 08:08:40.822057 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:08:40.823311 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:08:40.824469 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:08:40.827267 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache Aug 19 08:08:40.827689 oslogin_cache_refresh[1540]: Refreshing passwd entry cache Aug 19 08:08:40.828602 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:08:40.834866 extend-filesystems[1539]: Found /dev/vda6 Aug 19 08:08:40.836654 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:08:40.837196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:08:40.837610 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting Aug 19 08:08:40.839673 oslogin_cache_refresh[1540]: Failure getting users, quitting Aug 19 08:08:40.840615 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:08:40.840615 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache Aug 19 08:08:40.839892 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:08:40.839987 oslogin_cache_refresh[1540]: Refreshing group entry cache Aug 19 08:08:40.843171 extend-filesystems[1539]: Found /dev/vda9 Aug 19 08:08:40.847993 oslogin_cache_refresh[1540]: Failure getting groups, quitting Aug 19 08:08:40.848266 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting Aug 19 08:08:40.848266 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:08:40.848004 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Aug 19 08:08:40.848458 extend-filesystems[1539]: Checking size of /dev/vda9 Aug 19 08:08:40.851192 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:08:40.853116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:08:40.853368 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:08:40.855431 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:08:40.855688 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:08:40.858781 jq[1551]: true Aug 19 08:08:40.863111 update_engine[1550]: I20250819 08:08:40.861860 1550 main.cc:92] Flatcar Update Engine starting Aug 19 08:08:40.870770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:08:40.879237 kernel: kvm_amd: TSC scaling supported Aug 19 08:08:40.879288 kernel: kvm_amd: Nested Virtualization enabled Aug 19 08:08:40.879302 kernel: kvm_amd: Nested Paging enabled Aug 19 08:08:40.879314 kernel: kvm_amd: LBR virtualization supported Aug 19 08:08:40.885109 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 19 08:08:40.885143 kernel: kvm_amd: Virtual GIF supported Aug 19 08:08:40.884485 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:08:40.894698 extend-filesystems[1539]: Resized partition /dev/vda9 Aug 19 08:08:40.898019 jq[1568]: true Aug 19 08:08:40.902415 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:08:40.903259 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:08:40.904695 extend-filesystems[1582]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:08:40.910220 dbus-daemon[1536]: [system] SELinux support is enabled Aug 19 08:08:40.910640 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:08:40.913833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:08:40.913861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:08:40.915482 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:08:40.915505 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:08:40.921010 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 08:08:40.925554 tar[1558]: linux-amd64/LICENSE Aug 19 08:08:40.925805 tar[1558]: linux-amd64/helm Aug 19 08:08:40.927427 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:08:40.927745 update_engine[1550]: I20250819 08:08:40.927529 1550 update_check_scheduler.cc:74] Next update check in 11m10s Aug 19 08:08:40.930373 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:08:40.936447 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:08:40.936474 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:08:40.937678 systemd-logind[1548]: New seat seat0. 
Aug 19 08:08:41.025064 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 08:08:41.027789 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:08:41.052152 extend-filesystems[1582]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 08:08:41.052152 extend-filesystems[1582]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 08:08:41.052152 extend-filesystems[1582]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 08:08:41.059455 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Aug 19 08:08:41.061524 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:08:41.062134 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:08:41.065164 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:08:41.089039 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:08:41.189013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:08:41.190855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:08:41.211986 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:08:41.234415 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:08:41.235688 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 19 08:08:41.246303 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:08:41.261267 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:08:41.261608 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:08:41.264879 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:08:41.267639 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:08:41.289535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:08:41.293573 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:08:41.296462 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:08:41.297795 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 19 08:08:41.360176 containerd[1577]: time="2025-08-19T08:08:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:08:41.361156 containerd[1577]: time="2025-08-19T08:08:41.361127804Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:08:41.375859 containerd[1577]: time="2025-08-19T08:08:41.375801452Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.513µs" Aug 19 08:08:41.375859 containerd[1577]: time="2025-08-19T08:08:41.375845064Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:08:41.375859 containerd[1577]: time="2025-08-19T08:08:41.375868628Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:08:41.376206 containerd[1577]: time="2025-08-19T08:08:41.376113848Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:08:41.376206 containerd[1577]: time="2025-08-19T08:08:41.376135158Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:08:41.376206 containerd[1577]: time="2025-08-19T08:08:41.376173880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376272 containerd[1577]: time="2025-08-19T08:08:41.376242940Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376272 containerd[1577]: time="2025-08-19T08:08:41.376254421Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376634 containerd[1577]: time="2025-08-19T08:08:41.376609447Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376634 containerd[1577]: time="2025-08-19T08:08:41.376627571Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376683 containerd[1577]: time="2025-08-19T08:08:41.376638812Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376683 containerd[1577]: time="2025-08-19T08:08:41.376647588Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:08:41.376879 containerd[1577]: time="2025-08-19T08:08:41.376755611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:08:41.377049 containerd[1577]: time="2025-08-19T08:08:41.377024685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:08:41.377080 containerd[1577]: time="2025-08-19T08:08:41.377063989Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:08:41.377080 containerd[1577]: time="2025-08-19T08:08:41.377076052Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:08:41.377167 containerd[1577]: time="2025-08-19T08:08:41.377144951Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:08:41.377441 containerd[1577]: time="2025-08-19T08:08:41.377417101Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:08:41.377516 containerd[1577]: time="2025-08-19T08:08:41.377497993Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:08:41.384719 containerd[1577]: time="2025-08-19T08:08:41.384694013Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:08:41.384910 containerd[1577]: time="2025-08-19T08:08:41.384867678Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:08:41.384946 containerd[1577]: time="2025-08-19T08:08:41.384920808Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:08:41.384946 containerd[1577]: time="2025-08-19T08:08:41.384941166Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:08:41.385014 containerd[1577]: time="2025-08-19T08:08:41.384954230Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:08:41.385014 containerd[1577]: time="2025-08-19T08:08:41.384966323Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:08:41.385014 containerd[1577]: time="2025-08-19T08:08:41.384987232Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:08:41.385014 containerd[1577]: time="2025-08-19T08:08:41.385006238Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:08:41.385162 containerd[1577]: time="2025-08-19T08:08:41.385026195Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:08:41.385162 containerd[1577]: time="2025-08-19T08:08:41.385036785Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:08:41.385162 containerd[1577]: time="2025-08-19T08:08:41.385046133Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:08:41.385162 containerd[1577]: time="2025-08-19T08:08:41.385078193Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385271154Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385302864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385324214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385340655Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385351856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385362375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385374228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385397932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:08:41.385409 containerd[1577]: time="2025-08-19T08:08:41.385409554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:08:41.385676 containerd[1577]: time="2025-08-19T08:08:41.385421156Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:08:41.385676 containerd[1577]: time="2025-08-19T08:08:41.385431184Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:08:41.385676 containerd[1577]: time="2025-08-19T08:08:41.385541531Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:08:41.385676 containerd[1577]: time="2025-08-19T08:08:41.385586616Z" level=info msg="Start snapshots syncer" Aug 19 08:08:41.385676 containerd[1577]: time="2025-08-19T08:08:41.385613526Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:08:41.386424 containerd[1577]: time="2025-08-19T08:08:41.386341070Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:08:41.386825 containerd[1577]: time="2025-08-19T08:08:41.386573506Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:08:41.388440 containerd[1577]: time="2025-08-19T08:08:41.388274966Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:08:41.388702 containerd[1577]: time="2025-08-19T08:08:41.388676188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:08:41.388745 containerd[1577]: time="2025-08-19T08:08:41.388726472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:08:41.388767 containerd[1577]: time="2025-08-19T08:08:41.388747642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:08:41.388787 containerd[1577]: time="2025-08-19T08:08:41.388764714Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:08:41.388815 containerd[1577]: time="2025-08-19T08:08:41.388781796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:08:41.388815 containerd[1577]: time="2025-08-19T08:08:41.388799048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:08:41.388853 containerd[1577]: time="2025-08-19T08:08:41.388819667Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:08:41.388882 containerd[1577]: time="2025-08-19T08:08:41.388866935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:08:41.388934 containerd[1577]: 
time="2025-08-19T08:08:41.388912851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:08:41.388963 containerd[1577]: time="2025-08-19T08:08:41.388937728Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:08:41.389049 containerd[1577]: time="2025-08-19T08:08:41.388996378Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:08:41.389049 containerd[1577]: time="2025-08-19T08:08:41.389027396Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:08:41.389110 containerd[1577]: time="2025-08-19T08:08:41.389052122Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:08:41.389110 containerd[1577]: time="2025-08-19T08:08:41.389064856Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:08:41.389110 containerd[1577]: time="2025-08-19T08:08:41.389077580Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:08:41.389178 containerd[1577]: time="2025-08-19T08:08:41.389137513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:08:41.389178 containerd[1577]: time="2025-08-19T08:08:41.389158732Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:08:41.389238 containerd[1577]: time="2025-08-19T08:08:41.389220558Z" level=info msg="runtime interface created" Aug 19 08:08:41.389238 containerd[1577]: time="2025-08-19T08:08:41.389231819Z" level=info msg="created NRI interface" Aug 19 08:08:41.389275 containerd[1577]: time="2025-08-19T08:08:41.389242720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:08:41.389275 containerd[1577]: time="2025-08-19T08:08:41.389267075Z" level=info msg="Connect containerd service" Aug 19 08:08:41.389420 containerd[1577]: time="2025-08-19T08:08:41.389391809Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:08:41.413400 containerd[1577]: time="2025-08-19T08:08:41.413331698Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492306007Z" level=info msg="Start subscribing containerd event" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492389233Z" level=info msg="Start recovering state" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492521531Z" level=info msg="Start event monitor" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492535557Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492553721Z" level=info msg="Start streaming server" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492566004Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:08:41.492771 containerd[1577]: 
time="2025-08-19T08:08:41.492573147Z" level=info msg="runtime interface starting up..." Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492579650Z" level=info msg="starting plugins..." Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492595800Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492704494Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:08:41.492771 containerd[1577]: time="2025-08-19T08:08:41.492776288Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:08:41.492983 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:08:41.493458 containerd[1577]: time="2025-08-19T08:08:41.493440884Z" level=info msg="containerd successfully booted in 0.134091s" Aug 19 08:08:41.497697 tar[1558]: linux-amd64/README.md Aug 19 08:08:41.516407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 08:08:41.626256 systemd-networkd[1498]: eth0: Gained IPv6LL Aug 19 08:08:41.629179 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:08:41.630877 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:08:41.633355 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 08:08:41.635774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:08:41.637858 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:08:41.669328 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 08:08:41.669693 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 08:08:41.671646 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:08:41.674257 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:08:43.048973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:08:43.050923 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:08:43.053076 systemd[1]: Startup finished in 4.043s (kernel) + 6.961s (initrd) + 5.076s (userspace) = 16.080s. Aug 19 08:08:43.053355 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:08:43.609404 kubelet[1678]: E0819 08:08:43.609334 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:08:43.613525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:08:43.613732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:08:43.614156 systemd[1]: kubelet.service: Consumed 1.793s CPU time, 265.8M memory peak. Aug 19 08:08:44.593120 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 08:08:44.594668 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:37102.service - OpenSSH per-connection server daemon (10.0.0.1:37102). 
Aug 19 08:08:44.680849 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 37102 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:44.683108 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:44.690500 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:08:44.691929 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:08:44.698160 systemd-logind[1548]: New session 1 of user core. Aug 19 08:08:44.724395 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:08:44.728741 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:08:44.761747 (systemd)[1697]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:08:44.764171 systemd-logind[1548]: New session c1 of user core. Aug 19 08:08:44.970845 systemd[1697]: Queued start job for default target default.target. Aug 19 08:08:44.989032 systemd[1697]: Created slice app.slice - User Application Slice. Aug 19 08:08:44.989076 systemd[1697]: Reached target paths.target - Paths. Aug 19 08:08:44.989139 systemd[1697]: Reached target timers.target - Timers. Aug 19 08:08:44.992121 systemd[1697]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:08:45.039638 systemd[1697]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:08:45.039769 systemd[1697]: Reached target sockets.target - Sockets. Aug 19 08:08:45.039810 systemd[1697]: Reached target basic.target - Basic System. Aug 19 08:08:45.039850 systemd[1697]: Reached target default.target - Main User Target. Aug 19 08:08:45.039879 systemd[1697]: Startup finished in 268ms. Aug 19 08:08:45.040168 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:08:45.041902 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:08:45.105691 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:37116.service - OpenSSH per-connection server daemon (10.0.0.1:37116). Aug 19 08:08:45.185472 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 37116 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.186928 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.191368 systemd-logind[1548]: New session 2 of user core. Aug 19 08:08:45.201211 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 08:08:45.253608 sshd[1711]: Connection closed by 10.0.0.1 port 37116 Aug 19 08:08:45.253856 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:45.268605 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:37116.service: Deactivated successfully. Aug 19 08:08:45.270789 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:08:45.273315 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:08:45.275377 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:37126.service - OpenSSH per-connection server daemon (10.0.0.1:37126). Aug 19 08:08:45.276292 systemd-logind[1548]: Removed session 2. Aug 19 08:08:45.324912 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 37126 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.326160 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.330147 systemd-logind[1548]: New session 3 of user core. 
Aug 19 08:08:45.340215 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:08:45.388879 sshd[1720]: Connection closed by 10.0.0.1 port 37126 Aug 19 08:08:45.389240 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:45.402826 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:37126.service: Deactivated successfully. Aug 19 08:08:45.404690 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:08:45.405436 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:08:45.408306 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:37130.service - OpenSSH per-connection server daemon (10.0.0.1:37130). Aug 19 08:08:45.408861 systemd-logind[1548]: Removed session 3. Aug 19 08:08:45.458103 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 37130 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.459326 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.463275 systemd-logind[1548]: New session 4 of user core. Aug 19 08:08:45.478207 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:08:45.532335 sshd[1729]: Connection closed by 10.0.0.1 port 37130 Aug 19 08:08:45.532776 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:45.545616 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:37130.service: Deactivated successfully. Aug 19 08:08:45.547375 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 08:08:45.548101 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:08:45.550769 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:37140.service - OpenSSH per-connection server daemon (10.0.0.1:37140). Aug 19 08:08:45.551327 systemd-logind[1548]: Removed session 4. Aug 19 08:08:45.589852 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 37140 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.591759 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.596337 systemd-logind[1548]: New session 5 of user core. Aug 19 08:08:45.606216 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 19 08:08:45.663895 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:08:45.664247 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:45.684842 sudo[1739]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:45.686629 sshd[1738]: Connection closed by 10.0.0.1 port 37140 Aug 19 08:08:45.687041 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:45.700936 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:37140.service: Deactivated successfully. Aug 19 08:08:45.702894 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:08:45.703656 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:08:45.706630 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:37146.service - OpenSSH per-connection server daemon (10.0.0.1:37146). Aug 19 08:08:45.707176 systemd-logind[1548]: Removed session 5. 
Aug 19 08:08:45.759280 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 37146 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.760587 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.764650 systemd-logind[1548]: New session 6 of user core. Aug 19 08:08:45.772238 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:08:45.825029 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:08:45.825381 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:45.832387 sudo[1750]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:45.838305 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:08:45.838605 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:45.848329 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:08:45.892558 augenrules[1772]: No rules Aug 19 08:08:45.894437 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:08:45.894706 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:08:45.895920 sudo[1749]: pam_unix(sudo:session): session closed for user root Aug 19 08:08:45.897429 sshd[1748]: Connection closed by 10.0.0.1 port 37146 Aug 19 08:08:45.897762 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Aug 19 08:08:45.906882 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:37146.service: Deactivated successfully. Aug 19 08:08:45.908886 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 08:08:45.909657 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:08:45.912454 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:37148.service - OpenSSH per-connection server daemon (10.0.0.1:37148). Aug 19 08:08:45.912991 systemd-logind[1548]: Removed session 6. Aug 19 08:08:45.959110 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 37148 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:08:45.960367 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:08:45.964773 systemd-logind[1548]: New session 7 of user core. Aug 19 08:08:45.974222 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 08:08:46.027477 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:08:46.027790 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:08:46.322540 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 19 08:08:46.345418 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:08:46.556578 dockerd[1805]: time="2025-08-19T08:08:46.556508978Z" level=info msg="Starting up" Aug 19 08:08:46.557315 dockerd[1805]: time="2025-08-19T08:08:46.557287338Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:08:46.569012 dockerd[1805]: time="2025-08-19T08:08:46.568959308Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:08:47.201189 dockerd[1805]: time="2025-08-19T08:08:47.201126306Z" level=info msg="Loading containers: start." Aug 19 08:08:47.213123 kernel: Initializing XFRM netlink socket Aug 19 08:08:47.490839 systemd-networkd[1498]: docker0: Link UP Aug 19 08:08:47.498160 dockerd[1805]: time="2025-08-19T08:08:47.498112849Z" level=info msg="Loading containers: done." Aug 19 08:08:47.512262 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3283178092-merged.mount: Deactivated successfully. Aug 19 08:08:47.514438 dockerd[1805]: time="2025-08-19T08:08:47.514385625Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:08:47.514521 dockerd[1805]: time="2025-08-19T08:08:47.514470464Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:08:47.514583 dockerd[1805]: time="2025-08-19T08:08:47.514565633Z" level=info msg="Initializing buildkit" Aug 19 08:08:47.545195 dockerd[1805]: time="2025-08-19T08:08:47.545134488Z" level=info msg="Completed buildkit initialization" Aug 19 08:08:47.555257 dockerd[1805]: time="2025-08-19T08:08:47.555215597Z" level=info msg="Daemon has completed initialization" Aug 19 08:08:47.555411 dockerd[1805]: time="2025-08-19T08:08:47.555287031Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:08:47.555593 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 08:08:48.504508 containerd[1577]: time="2025-08-19T08:08:48.504433652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Aug 19 08:08:49.191290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385236207.mount: Deactivated successfully. 
Aug 19 08:08:50.549400 containerd[1577]: time="2025-08-19T08:08:50.549325413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:50.550107 containerd[1577]: time="2025-08-19T08:08:50.550043008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Aug 19 08:08:50.551423 containerd[1577]: time="2025-08-19T08:08:50.551395714Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:50.554761 containerd[1577]: time="2025-08-19T08:08:50.554698265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:50.555627 containerd[1577]: time="2025-08-19T08:08:50.555596329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.051091394s" Aug 19 08:08:50.555627 containerd[1577]: time="2025-08-19T08:08:50.555633609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Aug 19 08:08:50.556461 containerd[1577]: time="2025-08-19T08:08:50.556414262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Aug 19 08:08:52.111688 containerd[1577]: time="2025-08-19T08:08:52.111621350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:52.112484 containerd[1577]: time="2025-08-19T08:08:52.112449352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Aug 19 08:08:52.113851 containerd[1577]: time="2025-08-19T08:08:52.113818669Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:52.117653 containerd[1577]: time="2025-08-19T08:08:52.117626237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:52.118572 containerd[1577]: time="2025-08-19T08:08:52.118519442Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.56206835s" Aug 19 08:08:52.118572 containerd[1577]: time="2025-08-19T08:08:52.118579224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Aug 19 08:08:52.119233 
containerd[1577]: time="2025-08-19T08:08:52.119194908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Aug 19 08:08:53.680775 containerd[1577]: time="2025-08-19T08:08:53.680697337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:53.681735 containerd[1577]: time="2025-08-19T08:08:53.681669329Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Aug 19 08:08:53.682884 containerd[1577]: time="2025-08-19T08:08:53.682825016Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:53.685247 containerd[1577]: time="2025-08-19T08:08:53.685206029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:53.686167 containerd[1577]: time="2025-08-19T08:08:53.686108782Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.566855274s" Aug 19 08:08:53.686167 containerd[1577]: time="2025-08-19T08:08:53.686139499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Aug 19 08:08:53.686624 containerd[1577]: time="2025-08-19T08:08:53.686596937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Aug 19 08:08:53.713493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:08:53.715393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:08:53.963998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:08:53.968605 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:08:54.100479 kubelet[2094]: E0819 08:08:54.100394 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:08:54.106705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:08:54.106940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:08:54.107414 systemd[1]: kubelet.service: Consumed 286ms CPU time, 111.1M memory peak. Aug 19 08:08:55.397793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165919012.mount: Deactivated successfully. 
Aug 19 08:08:56.170458 containerd[1577]: time="2025-08-19T08:08:56.170367359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:56.171117 containerd[1577]: time="2025-08-19T08:08:56.171050911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Aug 19 08:08:56.172306 containerd[1577]: time="2025-08-19T08:08:56.172264165Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:56.174170 containerd[1577]: time="2025-08-19T08:08:56.174138789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:56.174906 containerd[1577]: time="2025-08-19T08:08:56.174859100Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.488133201s" Aug 19 08:08:56.174935 containerd[1577]: time="2025-08-19T08:08:56.174908502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Aug 19 08:08:56.175703 containerd[1577]: time="2025-08-19T08:08:56.175680690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 08:08:56.715421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834656423.mount: Deactivated successfully. 
Aug 19 08:08:58.305864 containerd[1577]: time="2025-08-19T08:08:58.305787359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:58.306555 containerd[1577]: time="2025-08-19T08:08:58.306518510Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 19 08:08:58.307866 containerd[1577]: time="2025-08-19T08:08:58.307811083Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:58.310533 containerd[1577]: time="2025-08-19T08:08:58.310496577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:08:58.311462 containerd[1577]: time="2025-08-19T08:08:58.311436058Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.135728418s" Aug 19 08:08:58.311529 containerd[1577]: time="2025-08-19T08:08:58.311477226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 19 08:08:58.312059 containerd[1577]: time="2025-08-19T08:08:58.312014202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:08:58.908487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524310187.mount: Deactivated successfully. 
Aug 19 08:08:58.914612 containerd[1577]: time="2025-08-19T08:08:58.914552697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:08:58.915278 containerd[1577]: time="2025-08-19T08:08:58.915223073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 19 08:08:58.916415 containerd[1577]: time="2025-08-19T08:08:58.916380964Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:08:58.918726 containerd[1577]: time="2025-08-19T08:08:58.918673552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:08:58.919469 containerd[1577]: time="2025-08-19T08:08:58.919421103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.37951ms" Aug 19 08:08:58.919469 containerd[1577]: time="2025-08-19T08:08:58.919460527Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:08:58.920236 containerd[1577]: time="2025-08-19T08:08:58.920188792Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 19 08:08:59.575516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950173268.mount: Deactivated successfully. 
Aug 19 08:09:01.400601 containerd[1577]: time="2025-08-19T08:09:01.400530647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:01.401301 containerd[1577]: time="2025-08-19T08:09:01.401223767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Aug 19 08:09:01.402604 containerd[1577]: time="2025-08-19T08:09:01.402558970Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:01.405609 containerd[1577]: time="2025-08-19T08:09:01.405541661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:01.406774 containerd[1577]: time="2025-08-19T08:09:01.406708188Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.486487666s" Aug 19 08:09:01.406774 containerd[1577]: time="2025-08-19T08:09:01.406759564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 19 08:09:03.882661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:03.882841 systemd[1]: kubelet.service: Consumed 286ms CPU time, 111.1M memory peak. Aug 19 08:09:03.885134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:03.909685 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Aug 19 08:09:03.909701 systemd[1]: Reloading... Aug 19 08:09:03.995185 zram_generator::config[2296]: No configuration found. Aug 19 08:09:04.385002 systemd[1]: Reloading finished in 474 ms. Aug 19 08:09:04.462797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:09:04.462895 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:09:04.463202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:04.463247 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.3M memory peak. Aug 19 08:09:04.464699 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:04.628786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:04.632936 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:09:04.673028 kubelet[2341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:04.673028 kubelet[2341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:09:04.673028 kubelet[2341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:04.673028 kubelet[2341]: I0819 08:09:04.672998 2341 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:09:05.109507 kubelet[2341]: I0819 08:09:05.109450 2341 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 08:09:05.109507 kubelet[2341]: I0819 08:09:05.109482 2341 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:09:05.109792 kubelet[2341]: I0819 08:09:05.109771 2341 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 08:09:05.269317 kubelet[2341]: E0819 08:09:05.269255 2341 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:05.270169 kubelet[2341]: I0819 08:09:05.270145 2341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:09:05.277847 kubelet[2341]: I0819 08:09:05.277822 2341 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:09:05.283068 kubelet[2341]: I0819 08:09:05.283033 2341 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 19 08:09:05.284572 kubelet[2341]: I0819 08:09:05.284524 2341 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:09:05.284785 kubelet[2341]: I0819 08:09:05.284560 2341 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:09:05.284959 kubelet[2341]: I0819 08:09:05.284793 2341 
topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:09:05.284959 kubelet[2341]: I0819 08:09:05.284804 2341 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 08:09:05.285005 kubelet[2341]: I0819 08:09:05.284985 2341 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:05.287538 kubelet[2341]: I0819 08:09:05.287502 2341 kubelet.go:446] "Attempting to sync node with API server" Aug 19 08:09:05.287538 kubelet[2341]: I0819 08:09:05.287538 2341 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:09:05.287632 kubelet[2341]: I0819 08:09:05.287565 2341 kubelet.go:352] "Adding apiserver pod source" Aug 19 08:09:05.287632 kubelet[2341]: I0819 08:09:05.287580 2341 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:09:05.293602 kubelet[2341]: W0819 08:09:05.293493 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:05.293842 kubelet[2341]: E0819 08:09:05.293756 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:05.293938 kubelet[2341]: W0819 08:09:05.293495 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:05.293938 kubelet[2341]: E0819 08:09:05.293874 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:05.293938 kubelet[2341]: I0819 08:09:05.293651 2341 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:09:05.294698 kubelet[2341]: I0819 08:09:05.294653 2341 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:09:05.296632 kubelet[2341]: W0819 08:09:05.296595 2341 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
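The container_manager_linux.go:273 entry above dumps the kubelet's NodeConfig, including the HardEvictionThresholds the eviction manager will enforce (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, plus the inode signals). Below is a minimal Go sketch of decoding that structure for inspection; the struct names are ad-hoc stand-ins for illustration, not the kubelet's internal types, and the JSON is a trimmed excerpt of the logged value.

    // Decode a trimmed excerpt of the HardEvictionThresholds value logged by
    // container_manager_linux.go:273 above. Ad-hoc types, illustration only.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type quantityOrPercent struct {
        Quantity   *string `json:"Quantity"`
        Percentage float64 `json:"Percentage"`
    }

    type threshold struct {
        Signal   string            `json:"Signal"`
        Operator string            `json:"Operator"`
        Value    quantityOrPercent `json:"Value"`
    }

    func main() {
        // Excerpt copied from the NodeConfig dump in the log.
        data := `[
          {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
          {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
          {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}
        ]`

        var thresholds []threshold
        if err := json.Unmarshal([]byte(data), &thresholds); err != nil {
            log.Fatal(err)
        }
        for _, t := range thresholds {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }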
Aug 19 08:09:05.298845 kubelet[2341]: I0819 08:09:05.298814 2341 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:09:05.298908 kubelet[2341]: I0819 08:09:05.298857 2341 server.go:1287] "Started kubelet" Aug 19 08:09:05.301013 kubelet[2341]: I0819 08:09:05.300655 2341 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:09:05.302247 kubelet[2341]: I0819 08:09:05.301628 2341 server.go:479] "Adding debug handlers to kubelet server" Aug 19 08:09:05.302803 kubelet[2341]: I0819 08:09:05.302724 2341 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:09:05.303170 kubelet[2341]: I0819 08:09:05.303136 2341 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:09:05.303927 kubelet[2341]: I0819 08:09:05.303903 2341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:09:05.304301 kubelet[2341]: I0819 08:09:05.304275 2341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:09:05.305772 kubelet[2341]: E0819 08:09:05.305756 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:05.305872 kubelet[2341]: I0819 08:09:05.305859 2341 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:09:05.306074 kubelet[2341]: I0819 08:09:05.306059 2341 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:09:05.306208 kubelet[2341]: I0819 08:09:05.306196 2341 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:09:05.306547 kubelet[2341]: W0819 08:09:05.306516 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:05.306637 kubelet[2341]: E0819 08:09:05.306620 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:05.306760 kubelet[2341]: E0819 08:09:05.306714 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Aug 19 08:09:05.307364 kubelet[2341]: I0819 08:09:05.307341 2341 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:09:05.309473 kubelet[2341]: E0819 08:09:05.306421 2341 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d1ca9c96dd5ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:09:05.298830828 +0000 UTC m=+0.662101825,LastTimestamp:2025-08-19 08:09:05.298830828 +0000 UTC m=+0.662101825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:09:05.309662 kubelet[2341]: I0819 08:09:05.309636 2341 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:09:05.309662 kubelet[2341]: I0819 08:09:05.309655 2341 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:09:05.311660 kubelet[2341]: E0819 08:09:05.311622 2341 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:09:05.325337 kubelet[2341]: I0819 08:09:05.325289 2341 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:09:05.325337 kubelet[2341]: I0819 08:09:05.325311 2341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:09:05.325337 kubelet[2341]: I0819 08:09:05.325345 2341 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:05.328137 kubelet[2341]: I0819 08:09:05.328059 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:09:05.329469 kubelet[2341]: I0819 08:09:05.329442 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:09:05.329524 kubelet[2341]: I0819 08:09:05.329479 2341 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 08:09:05.329524 kubelet[2341]: I0819 08:09:05.329502 2341 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
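The repeated "dial tcp 10.0.0.49:6443: connect: connection refused" errors above are the kubelet's informers, certificate bootstrap, lease controller, and event recorder all retrying against an API server whose static pod has not started yet. A minimal sketch of the reachability check follows, using the address taken from the log; a plain TCP dial is enough to tell "connection refused" apart from a listening endpoint (it is not what client-go does internally).

    // Probe the API server endpoint the kubelet is retrying above.
    // 10.0.0.49:6443 comes from the log lines; this only checks the listener.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.49:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }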
Aug 19 08:09:05.329524 kubelet[2341]: I0819 08:09:05.329511 2341 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 08:09:05.329610 kubelet[2341]: E0819 08:09:05.329562 2341 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:09:05.406526 kubelet[2341]: E0819 08:09:05.406438 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:05.429880 kubelet[2341]: E0819 08:09:05.429834 2341 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:09:05.507193 kubelet[2341]: E0819 08:09:05.507124 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:05.507563 kubelet[2341]: E0819 08:09:05.507523 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Aug 19 08:09:05.607800 kubelet[2341]: E0819 08:09:05.607734 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:05.630975 kubelet[2341]: E0819 08:09:05.630938 2341 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:09:05.708305 kubelet[2341]: E0819 08:09:05.708207 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:09:05.730281 kubelet[2341]: W0819 08:09:05.730194 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:05.730281 kubelet[2341]: E0819 08:09:05.730261 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:05.730534 kubelet[2341]: I0819 08:09:05.730383 2341 policy_none.go:49] "None policy: Start" Aug 19 08:09:05.730534 kubelet[2341]: I0819 08:09:05.730448 2341 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:09:05.730534 kubelet[2341]: I0819 08:09:05.730470 2341 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:09:05.739350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:09:05.755578 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:09:05.760571 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 08:09:05.773047 kubelet[2341]: I0819 08:09:05.773020 2341 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:09:05.773287 kubelet[2341]: I0819 08:09:05.773268 2341 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:09:05.773336 kubelet[2341]: I0819 08:09:05.773288 2341 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:09:05.773584 kubelet[2341]: I0819 08:09:05.773561 2341 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:09:05.774376 kubelet[2341]: E0819 08:09:05.774356 2341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 08:09:05.774520 kubelet[2341]: E0819 08:09:05.774490 2341 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 08:09:05.877231 kubelet[2341]: I0819 08:09:05.877018 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:05.877524 kubelet[2341]: E0819 08:09:05.877479 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 19 08:09:05.908148 kubelet[2341]: E0819 08:09:05.908072 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Aug 19 08:09:06.041133 systemd[1]: Created slice kubepods-burstable-pod9cbe47ec8b06374d605fc7967feb4a4a.slice - libcontainer container kubepods-burstable-pod9cbe47ec8b06374d605fc7967feb4a4a.slice. Aug 19 08:09:06.055233 kubelet[2341]: E0819 08:09:06.055197 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:06.058618 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Aug 19 08:09:06.073629 kubelet[2341]: E0819 08:09:06.073569 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:06.076345 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
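The systemd entries above create the QoS cgroup hierarchy (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) plus one slice per static pod. With CgroupDriver "systemd" and CgroupVersion 2 from the NodeConfig dump, these appear as directories under the cgroup v2 mount. A minimal sketch listing them; the mount point /sys/fs/cgroup is the conventional cgroup2 location and is assumed here, not shown in the log.

    // Walk the pod-related cgroup slices systemd created above. Assumes the
    // usual cgroup v2 mount at /sys/fs/cgroup.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/sys/fs/cgroup/kubepods.slice"
        err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if d.IsDir() {
                fmt.Println(path) // kubepods.slice, QoS slices, per-pod slices, ...
            }
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }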
Aug 19 08:09:06.078796 kubelet[2341]: I0819 08:09:06.078735 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:06.078921 kubelet[2341]: E0819 08:09:06.078737 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:06.079222 kubelet[2341]: E0819 08:09:06.079176 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 19 08:09:06.110703 kubelet[2341]: I0819 08:09:06.110654 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:06.110703 kubelet[2341]: I0819 08:09:06.110694 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:06.110815 kubelet[2341]: I0819 08:09:06.110710 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:06.110815 kubelet[2341]: I0819 08:09:06.110731 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:06.110815 kubelet[2341]: I0819 08:09:06.110747 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:06.110815 kubelet[2341]: I0819 08:09:06.110765 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:06.110815 kubelet[2341]: I0819 08:09:06.110781 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:06.110945 kubelet[2341]: I0819 08:09:06.110799 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:06.110945 kubelet[2341]: I0819 08:09:06.110813 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:06.356650 kubelet[2341]: E0819 08:09:06.356500 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.357382 containerd[1577]: time="2025-08-19T08:09:06.357326399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9cbe47ec8b06374d605fc7967feb4a4a,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:06.374625 kubelet[2341]: E0819 08:09:06.374583 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.375200 containerd[1577]: time="2025-08-19T08:09:06.375153218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:06.379375 kubelet[2341]: E0819 08:09:06.379330 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.379728 containerd[1577]: time="2025-08-19T08:09:06.379703278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:06.473221 kubelet[2341]: W0819 08:09:06.473076 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:06.473221 kubelet[2341]: E0819 08:09:06.473223 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:06.480945 kubelet[2341]: I0819 08:09:06.480897 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:06.481400 kubelet[2341]: E0819 08:09:06.481355 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Aug 19 08:09:06.560900 containerd[1577]: time="2025-08-19T08:09:06.560812753Z" level=info msg="connecting to shim 74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84" address="unix:///run/containerd/s/e9d6a22d6fa56a70fa0031eb56ad9dba2e8bc0d4d96cc635369a844580ea7d0f" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:06.564020 containerd[1577]: 
time="2025-08-19T08:09:06.563954814Z" level=info msg="connecting to shim 5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093" address="unix:///run/containerd/s/dc8877d7c03a04842694306987803543323d1b7c46fb6667708f48c56c2852f9" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:06.579351 containerd[1577]: time="2025-08-19T08:09:06.578863702Z" level=info msg="connecting to shim bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667" address="unix:///run/containerd/s/786b3ed41b3d4d3330d4e955fb3f09bf869b8b444f30bdb40fb07b9c49462346" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:06.603581 kubelet[2341]: W0819 08:09:06.603428 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:06.603828 kubelet[2341]: E0819 08:09:06.603787 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:06.668377 systemd[1]: Started cri-containerd-74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84.scope - libcontainer container 74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84. Aug 19 08:09:06.673129 systemd[1]: Started cri-containerd-5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093.scope - libcontainer container 5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093. Aug 19 08:09:06.686244 systemd[1]: Started cri-containerd-bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667.scope - libcontainer container bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667. 
Aug 19 08:09:06.686717 kubelet[2341]: W0819 08:09:06.686646 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:06.686858 kubelet[2341]: E0819 08:09:06.686824 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:06.709377 kubelet[2341]: E0819 08:09:06.709338 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Aug 19 08:09:06.735891 containerd[1577]: time="2025-08-19T08:09:06.735783030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84\"" Aug 19 08:09:06.738886 kubelet[2341]: E0819 08:09:06.738214 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.739522 containerd[1577]: time="2025-08-19T08:09:06.739479249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667\"" Aug 19 08:09:06.740473 kubelet[2341]: E0819 08:09:06.740432 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.741434 containerd[1577]: time="2025-08-19T08:09:06.741388428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9cbe47ec8b06374d605fc7967feb4a4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093\"" Aug 19 08:09:06.742291 kubelet[2341]: E0819 08:09:06.742240 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:06.742366 containerd[1577]: time="2025-08-19T08:09:06.742283196Z" level=info msg="CreateContainer within sandbox \"74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:09:06.742967 containerd[1577]: time="2025-08-19T08:09:06.742944335Z" level=info msg="CreateContainer within sandbox \"bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:09:06.743887 containerd[1577]: time="2025-08-19T08:09:06.743861044Z" level=info msg="CreateContainer within sandbox \"5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:09:06.752846 containerd[1577]: 
time="2025-08-19T08:09:06.752820529Z" level=info msg="Container 9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:06.756179 containerd[1577]: time="2025-08-19T08:09:06.756147667Z" level=info msg="Container 3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:06.760891 containerd[1577]: time="2025-08-19T08:09:06.760842157Z" level=info msg="CreateContainer within sandbox \"74bb81da2e111bf84e475eed3f413932dbae7c116dd02028ec7b1f58f76efa84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c\"" Aug 19 08:09:06.761435 containerd[1577]: time="2025-08-19T08:09:06.761394242Z" level=info msg="StartContainer for \"9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c\"" Aug 19 08:09:06.762363 containerd[1577]: time="2025-08-19T08:09:06.762328443Z" level=info msg="Container 037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:06.762611 containerd[1577]: time="2025-08-19T08:09:06.762581327Z" level=info msg="connecting to shim 9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c" address="unix:///run/containerd/s/e9d6a22d6fa56a70fa0031eb56ad9dba2e8bc0d4d96cc635369a844580ea7d0f" protocol=ttrpc version=3 Aug 19 08:09:06.766032 containerd[1577]: time="2025-08-19T08:09:06.765995718Z" level=info msg="CreateContainer within sandbox \"bde6f90290eebb95f0854d7eb8c5de9caa1536b195e64c8eb57ae6e3bca63667\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec\"" Aug 19 08:09:06.766684 containerd[1577]: time="2025-08-19T08:09:06.766380029Z" level=info msg="StartContainer for \"3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec\"" Aug 19 08:09:06.767385 containerd[1577]: time="2025-08-19T08:09:06.767360186Z" level=info msg="connecting to shim 3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec" address="unix:///run/containerd/s/786b3ed41b3d4d3330d4e955fb3f09bf869b8b444f30bdb40fb07b9c49462346" protocol=ttrpc version=3 Aug 19 08:09:06.771825 containerd[1577]: time="2025-08-19T08:09:06.771797084Z" level=info msg="CreateContainer within sandbox \"5bd0a556d8b49d0aedeeb2be883f5b9796bf0112de5e0b271df4e53989d0e093\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5\"" Aug 19 08:09:06.772556 containerd[1577]: time="2025-08-19T08:09:06.772409402Z" level=info msg="StartContainer for \"037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5\"" Aug 19 08:09:06.773580 containerd[1577]: time="2025-08-19T08:09:06.773542526Z" level=info msg="connecting to shim 037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5" address="unix:///run/containerd/s/dc8877d7c03a04842694306987803543323d1b7c46fb6667708f48c56c2852f9" protocol=ttrpc version=3 Aug 19 08:09:06.784349 systemd[1]: Started cri-containerd-9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c.scope - libcontainer container 9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c. Aug 19 08:09:06.788014 systemd[1]: Started cri-containerd-3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec.scope - libcontainer container 3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec. 
Aug 19 08:09:06.798507 systemd[1]: Started cri-containerd-037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5.scope - libcontainer container 037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5. Aug 19 08:09:06.798753 kubelet[2341]: W0819 08:09:06.798537 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Aug 19 08:09:06.798753 kubelet[2341]: E0819 08:09:06.798608 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.49:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:09:07.202453 containerd[1577]: time="2025-08-19T08:09:07.202382407Z" level=info msg="StartContainer for \"3eaf79d3b7d2655821f2e610ff63ec212bcc67548452563bdd3db3da020232ec\" returns successfully" Aug 19 08:09:07.203965 containerd[1577]: time="2025-08-19T08:09:07.203935679Z" level=info msg="StartContainer for \"9d2317ea3281ff59431af9b576ccd226df142778aaaeba42f7934e1efe61f24c\" returns successfully" Aug 19 08:09:07.204855 containerd[1577]: time="2025-08-19T08:09:07.204802424Z" level=info msg="StartContainer for \"037f4db08eade92dfc2fec38c5ec8c7880ec639e6146dd6408e2a56e846109d5\" returns successfully" Aug 19 08:09:07.283000 kubelet[2341]: I0819 08:09:07.282934 2341 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:07.343117 kubelet[2341]: E0819 08:09:07.343049 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:07.343274 kubelet[2341]: E0819 08:09:07.343191 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:07.345607 kubelet[2341]: E0819 08:09:07.345576 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:07.348720 kubelet[2341]: E0819 08:09:07.348688 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:07.349742 kubelet[2341]: E0819 08:09:07.348802 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:07.349742 kubelet[2341]: E0819 08:09:07.349648 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:08.347625 kubelet[2341]: E0819 08:09:08.347574 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:08.348124 kubelet[2341]: E0819 08:09:08.347710 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 08:09:08.348124 kubelet[2341]: E0819 08:09:08.347733 2341 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:08.348124 kubelet[2341]: E0819 08:09:08.347841 2341 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:08.819378 kubelet[2341]: E0819 08:09:08.819305 2341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 08:09:08.939661 kubelet[2341]: E0819 08:09:08.939413 2341 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185d1ca9c96dd5ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:09:05.298830828 +0000 UTC m=+0.662101825,LastTimestamp:2025-08-19 08:09:05.298830828 +0000 UTC m=+0.662101825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:09:08.940817 kubelet[2341]: I0819 08:09:08.940778 2341 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 08:09:08.940817 kubelet[2341]: E0819 08:09:08.940809 2341 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 19 08:09:09.006999 kubelet[2341]: I0819 08:09:09.006942 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:09.083831 kubelet[2341]: E0819 08:09:09.083489 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:09.083831 kubelet[2341]: I0819 08:09:09.083527 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:09.085297 kubelet[2341]: E0819 08:09:09.085274 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:09.085297 kubelet[2341]: I0819 08:09:09.085295 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:09.086498 kubelet[2341]: E0819 08:09:09.086456 2341 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:09.292137 kubelet[2341]: I0819 08:09:09.292071 2341 apiserver.go:52] "Watching apiserver" Aug 19 08:09:09.306447 kubelet[2341]: I0819 08:09:09.306416 2341 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:09:11.078916 systemd[1]: Reload requested from client PID 2617 ('systemctl') (unit session-7.scope)... Aug 19 08:09:11.078935 systemd[1]: Reloading... Aug 19 08:09:11.180128 zram_generator::config[2663]: No configuration found. 
Aug 19 08:09:11.497885 systemd[1]: Reloading finished in 418 ms. Aug 19 08:09:11.532484 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:11.554473 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:09:11.554824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:11.554876 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 133M memory peak. Aug 19 08:09:11.556740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:09:11.752125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:09:11.760515 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:09:11.798593 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:11.798593 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 08:09:11.798593 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:09:11.799011 kubelet[2705]: I0819 08:09:11.798650 2705 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:09:11.805830 kubelet[2705]: I0819 08:09:11.805779 2705 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 08:09:11.805830 kubelet[2705]: I0819 08:09:11.805811 2705 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:09:11.806113 kubelet[2705]: I0819 08:09:11.806077 2705 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 08:09:11.807328 kubelet[2705]: I0819 08:09:11.807301 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 08:09:11.810171 kubelet[2705]: I0819 08:09:11.810135 2705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:09:11.813481 kubelet[2705]: I0819 08:09:11.813447 2705 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:09:11.818969 kubelet[2705]: I0819 08:09:11.818916 2705 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:09:11.819232 kubelet[2705]: I0819 08:09:11.819199 2705 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:09:11.819415 kubelet[2705]: I0819 08:09:11.819228 2705 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:09:11.819415 kubelet[2705]: I0819 08:09:11.819415 2705 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:09:11.819546 kubelet[2705]: I0819 08:09:11.819423 2705 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 08:09:11.819546 kubelet[2705]: I0819 08:09:11.819477 2705 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:11.819672 kubelet[2705]: I0819 08:09:11.819653 2705 kubelet.go:446] "Attempting to sync node with API server" Aug 19 08:09:11.819701 kubelet[2705]: I0819 08:09:11.819675 2705 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:09:11.819701 kubelet[2705]: I0819 08:09:11.819700 2705 kubelet.go:352] "Adding apiserver pod source" Aug 19 08:09:11.819745 kubelet[2705]: I0819 08:09:11.819710 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:09:11.820726 kubelet[2705]: I0819 08:09:11.820208 2705 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:09:11.820796 kubelet[2705]: I0819 08:09:11.820727 2705 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:09:11.821248 kubelet[2705]: I0819 08:09:11.821219 2705 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:09:11.821300 kubelet[2705]: I0819 08:09:11.821254 2705 server.go:1287] "Started kubelet" Aug 19 08:09:11.821559 kubelet[2705]: I0819 08:09:11.821532 2705 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:09:11.822611 kubelet[2705]: I0819 08:09:11.822579 2705 server.go:479] "Adding debug 
handlers to kubelet server" Aug 19 08:09:11.824828 kubelet[2705]: I0819 08:09:11.824810 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:09:11.829018 kubelet[2705]: I0819 08:09:11.821561 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:09:11.829763 kubelet[2705]: I0819 08:09:11.829730 2705 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:09:11.830565 kubelet[2705]: I0819 08:09:11.830532 2705 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:09:11.831704 kubelet[2705]: E0819 08:09:11.831674 2705 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:09:11.831890 kubelet[2705]: I0819 08:09:11.831862 2705 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:09:11.832830 kubelet[2705]: I0819 08:09:11.832797 2705 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:09:11.833049 kubelet[2705]: I0819 08:09:11.833026 2705 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:09:11.834359 kubelet[2705]: I0819 08:09:11.834286 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:09:11.835780 kubelet[2705]: I0819 08:09:11.835753 2705 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:09:11.835780 kubelet[2705]: I0819 08:09:11.835770 2705 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:09:11.838314 kubelet[2705]: I0819 08:09:11.838276 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:09:11.839650 kubelet[2705]: I0819 08:09:11.839624 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:09:11.839650 kubelet[2705]: I0819 08:09:11.839647 2705 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 08:09:11.839725 kubelet[2705]: I0819 08:09:11.839665 2705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
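Both kubelet starts log "Systemd watchdog is not enabled" (watchdog_linux.go:99) and then skip watchdog health checking (watchdog_linux.go:127). That decision is driven by the watchdog environment systemd exports when WatchdogSec= is set on the unit; the sketch below reads the same variables directly (the kubelet itself goes through a systemd helper library, so this only illustrates the signal it looks for).

    // Check the sd_notify watchdog environment that systemd sets when
    // WatchdogSec= is configured on the service unit.
    package main

    import (
        "fmt"
        "os"
        "strconv"
        "time"
    )

    func main() {
        usec := os.Getenv("WATCHDOG_USEC")
        if usec == "" {
            fmt.Println("systemd watchdog is not enabled")
            return
        }
        n, err := strconv.ParseInt(usec, 10, 64)
        if err != nil || n <= 0 {
            fmt.Println("watchdog interval is invalid:", usec)
            return
        }
        fmt.Println("watchdog interval:", time.Duration(n)*time.Microsecond)
    }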
Aug 19 08:09:11.839725 kubelet[2705]: I0819 08:09:11.839673 2705 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 08:09:11.839772 kubelet[2705]: E0819 08:09:11.839724 2705 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:09:11.878000 kubelet[2705]: I0819 08:09:11.877955 2705 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:09:11.878000 kubelet[2705]: I0819 08:09:11.877976 2705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:09:11.878000 kubelet[2705]: I0819 08:09:11.877995 2705 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:09:11.878249 kubelet[2705]: I0819 08:09:11.878198 2705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:09:11.878249 kubelet[2705]: I0819 08:09:11.878210 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:09:11.878249 kubelet[2705]: I0819 08:09:11.878228 2705 policy_none.go:49] "None policy: Start" Aug 19 08:09:11.878249 kubelet[2705]: I0819 08:09:11.878237 2705 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:09:11.878249 kubelet[2705]: I0819 08:09:11.878248 2705 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:09:11.878425 kubelet[2705]: I0819 08:09:11.878368 2705 state_mem.go:75] "Updated machine memory state" Aug 19 08:09:11.884133 kubelet[2705]: I0819 08:09:11.884079 2705 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:09:11.884466 kubelet[2705]: I0819 08:09:11.884449 2705 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:09:11.884584 kubelet[2705]: I0819 08:09:11.884543 2705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:09:11.885323 kubelet[2705]: I0819 08:09:11.885131 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:09:11.886787 kubelet[2705]: E0819 08:09:11.886762 2705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 08:09:11.940516 kubelet[2705]: I0819 08:09:11.940458 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:11.940516 kubelet[2705]: I0819 08:09:11.940525 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:11.940700 kubelet[2705]: I0819 08:09:11.940543 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:11.992414 kubelet[2705]: I0819 08:09:11.992357 2705 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 08:09:11.999082 kubelet[2705]: I0819 08:09:11.999046 2705 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 19 08:09:11.999173 kubelet[2705]: I0819 08:09:11.999151 2705 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 08:09:12.077798 sudo[2743]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 19 08:09:12.078188 sudo[2743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 19 08:09:12.134533 kubelet[2705]: I0819 08:09:12.134345 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.134533 kubelet[2705]: I0819 08:09:12.134385 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.134533 kubelet[2705]: I0819 08:09:12.134404 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.134533 kubelet[2705]: I0819 08:09:12.134420 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:12.134533 kubelet[2705]: I0819 08:09:12.134434 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:12.134764 kubelet[2705]: I0819 08:09:12.134448 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " 
pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.134764 kubelet[2705]: I0819 08:09:12.134461 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:12.134764 kubelet[2705]: I0819 08:09:12.134475 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cbe47ec8b06374d605fc7967feb4a4a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9cbe47ec8b06374d605fc7967feb4a4a\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:09:12.134764 kubelet[2705]: I0819 08:09:12.134490 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.245226 kubelet[2705]: E0819 08:09:12.245184 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:12.246357 kubelet[2705]: E0819 08:09:12.246275 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:12.246587 kubelet[2705]: E0819 08:09:12.246406 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:12.577936 sudo[2743]: pam_unix(sudo:session): session closed for user root Aug 19 08:09:12.821037 kubelet[2705]: I0819 08:09:12.820985 2705 apiserver.go:52] "Watching apiserver" Aug 19 08:09:12.833779 kubelet[2705]: I0819 08:09:12.833688 2705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 08:09:12.851242 kubelet[2705]: I0819 08:09:12.851194 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:12.851552 kubelet[2705]: I0819 08:09:12.851534 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:12.852403 kubelet[2705]: E0819 08:09:12.851877 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:13.090958 kubelet[2705]: E0819 08:09:13.089909 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 19 08:09:13.090958 kubelet[2705]: E0819 08:09:13.090570 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 19 08:09:13.090958 kubelet[2705]: E0819 08:09:13.090805 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:13.091486 kubelet[2705]: E0819 08:09:13.090082 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:13.123401 kubelet[2705]: I0819 08:09:13.123308 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.123270719 podStartE2EDuration="2.123270719s" podCreationTimestamp="2025-08-19 08:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:13.113417815 +0000 UTC m=+1.348525974" watchObservedRunningTime="2025-08-19 08:09:13.123270719 +0000 UTC m=+1.358378877" Aug 19 08:09:13.131081 kubelet[2705]: I0819 08:09:13.131018 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.130999576 podStartE2EDuration="2.130999576s" podCreationTimestamp="2025-08-19 08:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:13.124027739 +0000 UTC m=+1.359135897" watchObservedRunningTime="2025-08-19 08:09:13.130999576 +0000 UTC m=+1.366107734" Aug 19 08:09:13.765559 sudo[1785]: pam_unix(sudo:session): session closed for user root Aug 19 08:09:13.767140 sshd[1784]: Connection closed by 10.0.0.1 port 37148 Aug 19 08:09:13.767678 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:13.772540 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:37148.service: Deactivated successfully. Aug 19 08:09:13.774971 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:09:13.775211 systemd[1]: session-7.scope: Consumed 4.627s CPU time, 263.1M memory peak. Aug 19 08:09:13.776545 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:09:13.777628 systemd-logind[1548]: Removed session 7. 
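The pod_startup_latency_tracker entries above report podStartSLOduration=2.123270719s for kube-apiserver-localhost, which is simply the observedRunningTime minus the podCreationTimestamp printed in the same line. A minimal sketch re-deriving it from those two timestamps:

    // Re-derive podStartSLOduration for kube-apiserver-localhost from the two
    // timestamps in the pod_startup_latency_tracker entry above:
    // 2025-08-19 08:09:13.123270719 - 2025-08-19 08:09:11 = 2.123270719s.
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-08-19 08:09:11 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        observed, err := time.Parse(layout, "2025-08-19 08:09:13.123270719 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(observed.Sub(created)) // 2.123270719s
    }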
Aug 19 08:09:13.852720 kubelet[2705]: E0819 08:09:13.852686 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:13.852720 kubelet[2705]: E0819 08:09:13.852756 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:13.853198 kubelet[2705]: E0819 08:09:13.852826 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:14.855208 kubelet[2705]: E0819 08:09:14.855161 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:14.855844 kubelet[2705]: E0819 08:09:14.855257 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:16.921404 kubelet[2705]: I0819 08:09:16.921367 2705 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:09:16.921859 containerd[1577]: time="2025-08-19T08:09:16.921752155Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:09:16.922132 kubelet[2705]: I0819 08:09:16.921912 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:09:17.674477 kubelet[2705]: I0819 08:09:17.674407 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.674386456 podStartE2EDuration="6.674386456s" podCreationTimestamp="2025-08-19 08:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:13.131634762 +0000 UTC m=+1.366742920" watchObservedRunningTime="2025-08-19 08:09:17.674386456 +0000 UTC m=+5.909494614" Aug 19 08:09:17.687664 systemd[1]: Created slice kubepods-besteffort-pod5003685b_b4fc_4eaa_8837_fa7abe2ff9cf.slice - libcontainer container kubepods-besteffort-pod5003685b_b4fc_4eaa_8837_fa7abe2ff9cf.slice. Aug 19 08:09:17.696995 systemd[1]: Created slice kubepods-burstable-podbf0f0808_84bb_4aad_b3a6_c95dc3ec4c80.slice - libcontainer container kubepods-burstable-podbf0f0808_84bb_4aad_b3a6_c95dc3ec4c80.slice. 
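The recurring dns.go:153 "Nameserver limits exceeded" warnings above all carry the same applied line, 1.1.1.1 1.0.0.1 8.8.8.8: the node's resolver configuration lists more nameservers than kubelet will pass through, so kubelet keeps the first entries and logs the rest as omitted. A rough sketch of that trimming behaviour (the limit of three and the resolv.conf-style input are assumptions for illustration, not taken from this log):

MAX_NAMESERVERS = 3  # assumed limit for this sketch; matches the three servers kept in the log

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    # Collect "nameserver <addr>" entries in order of appearance.
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    if len(servers) > MAX_NAMESERVERS:
        kept = servers[:MAX_NAMESERVERS]
        print("Nameserver limits exceeded, applied nameserver line is:", " ".join(kept))
        return kept
    return servers

# With a hypothetical fourth server (9.9.9.9), only the first three survive.
print(applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"
))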
Aug 19 08:09:17.768671 kubelet[2705]: I0819 08:09:17.768612 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cni-path\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768671 kubelet[2705]: I0819 08:09:17.768652 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-xtables-lock\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768671 kubelet[2705]: I0819 08:09:17.768673 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5003685b-b4fc-4eaa-8837-fa7abe2ff9cf-kube-proxy\") pod \"kube-proxy-b6hkl\" (UID: \"5003685b-b4fc-4eaa-8837-fa7abe2ff9cf\") " pod="kube-system/kube-proxy-b6hkl" Aug 19 08:09:17.768869 kubelet[2705]: I0819 08:09:17.768687 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hostproc\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768869 kubelet[2705]: I0819 08:09:17.768700 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-lib-modules\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768869 kubelet[2705]: I0819 08:09:17.768716 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-kernel\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768869 kubelet[2705]: I0819 08:09:17.768730 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hubble-tls\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768869 kubelet[2705]: I0819 08:09:17.768749 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcqb\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-kube-api-access-lwcqb\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768852 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-run\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768899 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-bpf-maps\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768915 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-cgroup\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768930 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-clustermesh-secrets\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768951 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-net\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.768994 kubelet[2705]: I0819 08:09:17.768980 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k86h\" (UniqueName: \"kubernetes.io/projected/5003685b-b4fc-4eaa-8837-fa7abe2ff9cf-kube-api-access-5k86h\") pod \"kube-proxy-b6hkl\" (UID: \"5003685b-b4fc-4eaa-8837-fa7abe2ff9cf\") " pod="kube-system/kube-proxy-b6hkl" Aug 19 08:09:17.769153 kubelet[2705]: I0819 08:09:17.769006 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5003685b-b4fc-4eaa-8837-fa7abe2ff9cf-xtables-lock\") pod \"kube-proxy-b6hkl\" (UID: \"5003685b-b4fc-4eaa-8837-fa7abe2ff9cf\") " pod="kube-system/kube-proxy-b6hkl" Aug 19 08:09:17.769153 kubelet[2705]: I0819 08:09:17.769022 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5003685b-b4fc-4eaa-8837-fa7abe2ff9cf-lib-modules\") pod \"kube-proxy-b6hkl\" (UID: \"5003685b-b4fc-4eaa-8837-fa7abe2ff9cf\") " pod="kube-system/kube-proxy-b6hkl" Aug 19 08:09:17.769153 kubelet[2705]: I0819 08:09:17.769035 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-etc-cni-netd\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.769153 kubelet[2705]: I0819 08:09:17.769048 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-config-path\") pod \"cilium-mn656\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " pod="kube-system/cilium-mn656" Aug 19 08:09:17.997612 kubelet[2705]: E0819 08:09:17.997554 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:17.998632 containerd[1577]: time="2025-08-19T08:09:17.998280838Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6hkl,Uid:5003685b-b4fc-4eaa-8837-fa7abe2ff9cf,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:17.999730 kubelet[2705]: E0819 08:09:17.999684 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:18.000302 containerd[1577]: time="2025-08-19T08:09:18.000176691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mn656,Uid:bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:18.026866 containerd[1577]: time="2025-08-19T08:09:18.026811719Z" level=info msg="connecting to shim be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35" address="unix:///run/containerd/s/6db068a65a84f9411936ffbd42eec33fad2d4ad64316afd02b03a55b1fd7ee50" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:18.034191 containerd[1577]: time="2025-08-19T08:09:18.034134563Z" level=info msg="connecting to shim 9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:18.057257 systemd[1]: Started cri-containerd-be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35.scope - libcontainer container be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35. Aug 19 08:09:18.061302 systemd[1]: Started cri-containerd-9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c.scope - libcontainer container 9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c. Aug 19 08:09:18.100772 systemd[1]: Created slice kubepods-besteffort-pod7bfdcc58_3a50_4307_876f_d489ae79afb2.slice - libcontainer container kubepods-besteffort-pod7bfdcc58_3a50_4307_876f_d489ae79afb2.slice. 
Aug 19 08:09:18.120197 containerd[1577]: time="2025-08-19T08:09:18.120146908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mn656,Uid:bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\"" Aug 19 08:09:18.120994 kubelet[2705]: E0819 08:09:18.120957 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:18.123533 containerd[1577]: time="2025-08-19T08:09:18.123473805Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 19 08:09:18.127607 containerd[1577]: time="2025-08-19T08:09:18.127574955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6hkl,Uid:5003685b-b4fc-4eaa-8837-fa7abe2ff9cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35\"" Aug 19 08:09:18.128786 kubelet[2705]: E0819 08:09:18.128742 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:18.131354 containerd[1577]: time="2025-08-19T08:09:18.131317227Z" level=info msg="CreateContainer within sandbox \"be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:09:18.142875 containerd[1577]: time="2025-08-19T08:09:18.142817145Z" level=info msg="Container 76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:18.151368 containerd[1577]: time="2025-08-19T08:09:18.151327704Z" level=info msg="CreateContainer within sandbox \"be74f2b7bb4d2e641a31f3fcd0073fd32a3ec8efffc5413ad1fcd6f92b144d35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f\"" Aug 19 08:09:18.152112 containerd[1577]: time="2025-08-19T08:09:18.151796160Z" level=info msg="StartContainer for \"76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f\"" Aug 19 08:09:18.153463 containerd[1577]: time="2025-08-19T08:09:18.153436579Z" level=info msg="connecting to shim 76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f" address="unix:///run/containerd/s/6db068a65a84f9411936ffbd42eec33fad2d4ad64316afd02b03a55b1fd7ee50" protocol=ttrpc version=3 Aug 19 08:09:18.171322 kubelet[2705]: I0819 08:09:18.171281 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfdcc58-3a50-4307-876f-d489ae79afb2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8djq8\" (UID: \"7bfdcc58-3a50-4307-876f-d489ae79afb2\") " pod="kube-system/cilium-operator-6c4d7847fc-8djq8" Aug 19 08:09:18.171322 kubelet[2705]: I0819 08:09:18.171319 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxnr6\" (UniqueName: \"kubernetes.io/projected/7bfdcc58-3a50-4307-876f-d489ae79afb2-kube-api-access-fxnr6\") pod \"cilium-operator-6c4d7847fc-8djq8\" (UID: \"7bfdcc58-3a50-4307-876f-d489ae79afb2\") " pod="kube-system/cilium-operator-6c4d7847fc-8djq8" Aug 19 08:09:18.183234 systemd[1]: Started 
cri-containerd-76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f.scope - libcontainer container 76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f. Aug 19 08:09:18.229830 containerd[1577]: time="2025-08-19T08:09:18.229793876Z" level=info msg="StartContainer for \"76f25493ecc7271fc4ba8c79b2bbb6b196c50232832ce7d6ec873ab7d09b6b0f\" returns successfully" Aug 19 08:09:18.407323 kubelet[2705]: E0819 08:09:18.404336 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:18.407444 containerd[1577]: time="2025-08-19T08:09:18.406379003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8djq8,Uid:7bfdcc58-3a50-4307-876f-d489ae79afb2,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:18.432048 containerd[1577]: time="2025-08-19T08:09:18.431981091Z" level=info msg="connecting to shim eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278" address="unix:///run/containerd/s/73460fe68537156be2638e7ca01111561eb9e39b6bdc29229d74d6282355048a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:18.458241 systemd[1]: Started cri-containerd-eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278.scope - libcontainer container eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278. Aug 19 08:09:18.532738 containerd[1577]: time="2025-08-19T08:09:18.532686607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8djq8,Uid:7bfdcc58-3a50-4307-876f-d489ae79afb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\"" Aug 19 08:09:18.533521 kubelet[2705]: E0819 08:09:18.533494 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:18.866962 kubelet[2705]: E0819 08:09:18.866874 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:21.779484 kubelet[2705]: E0819 08:09:21.779393 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:21.792321 kubelet[2705]: I0819 08:09:21.792199 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b6hkl" podStartSLOduration=4.792176345 podStartE2EDuration="4.792176345s" podCreationTimestamp="2025-08-19 08:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:18.877261621 +0000 UTC m=+7.112369799" watchObservedRunningTime="2025-08-19 08:09:21.792176345 +0000 UTC m=+10.027284493" Aug 19 08:09:21.873903 kubelet[2705]: E0819 08:09:21.873856 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:22.875054 kubelet[2705]: E0819 08:09:22.874988 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:24.681281 kubelet[2705]: E0819 08:09:24.681239 2705 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:24.783967 kubelet[2705]: E0819 08:09:24.783899 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:24.879253 kubelet[2705]: E0819 08:09:24.879208 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:25.978819 update_engine[1550]: I20250819 08:09:25.978726 1550 update_attempter.cc:509] Updating boot flags... Aug 19 08:09:29.372723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134882126.mount: Deactivated successfully. Aug 19 08:09:34.425364 containerd[1577]: time="2025-08-19T08:09:34.425299329Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:34.426188 containerd[1577]: time="2025-08-19T08:09:34.426113706Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 19 08:09:34.427320 containerd[1577]: time="2025-08-19T08:09:34.427255312Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:34.428752 containerd[1577]: time="2025-08-19T08:09:34.428711753Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.30517126s" Aug 19 08:09:34.428752 containerd[1577]: time="2025-08-19T08:09:34.428749043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 19 08:09:34.435379 containerd[1577]: time="2025-08-19T08:09:34.435350370Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 08:09:34.438805 containerd[1577]: time="2025-08-19T08:09:34.438756444Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:09:34.450689 containerd[1577]: time="2025-08-19T08:09:34.450629727Z" level=info msg="Container ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:34.457581 containerd[1577]: time="2025-08-19T08:09:34.457529889Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\"" Aug 19 08:09:34.458199 containerd[1577]: 
time="2025-08-19T08:09:34.458170028Z" level=info msg="StartContainer for \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\"" Aug 19 08:09:34.459237 containerd[1577]: time="2025-08-19T08:09:34.459080487Z" level=info msg="connecting to shim ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" protocol=ttrpc version=3 Aug 19 08:09:34.487282 systemd[1]: Started cri-containerd-ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b.scope - libcontainer container ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b. Aug 19 08:09:34.521880 containerd[1577]: time="2025-08-19T08:09:34.521830608Z" level=info msg="StartContainer for \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" returns successfully" Aug 19 08:09:34.532982 systemd[1]: cri-containerd-ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b.scope: Deactivated successfully. Aug 19 08:09:34.534939 containerd[1577]: time="2025-08-19T08:09:34.534889650Z" level=info msg="received exit event container_id:\"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" id:\"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" pid:3144 exited_at:{seconds:1755590974 nanos:534468185}" Aug 19 08:09:34.535077 containerd[1577]: time="2025-08-19T08:09:34.534955315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" id:\"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" pid:3144 exited_at:{seconds:1755590974 nanos:534468185}" Aug 19 08:09:34.555247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b-rootfs.mount: Deactivated successfully. 
Aug 19 08:09:34.898816 kubelet[2705]: E0819 08:09:34.898745 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:34.902824 containerd[1577]: time="2025-08-19T08:09:34.902775561Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:09:34.913316 containerd[1577]: time="2025-08-19T08:09:34.913274179Z" level=info msg="Container 90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:34.919514 containerd[1577]: time="2025-08-19T08:09:34.919458128Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\"" Aug 19 08:09:34.919912 containerd[1577]: time="2025-08-19T08:09:34.919883982Z" level=info msg="StartContainer for \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\"" Aug 19 08:09:34.920806 containerd[1577]: time="2025-08-19T08:09:34.920767732Z" level=info msg="connecting to shim 90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" protocol=ttrpc version=3 Aug 19 08:09:34.945228 systemd[1]: Started cri-containerd-90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c.scope - libcontainer container 90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c. Aug 19 08:09:34.977551 containerd[1577]: time="2025-08-19T08:09:34.977447517Z" level=info msg="StartContainer for \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" returns successfully" Aug 19 08:09:34.992153 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:09:34.992620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:09:34.992816 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:09:34.994309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:09:34.995385 systemd[1]: cri-containerd-90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c.scope: Deactivated successfully. Aug 19 08:09:34.996674 containerd[1577]: time="2025-08-19T08:09:34.996403377Z" level=info msg="received exit event container_id:\"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" id:\"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" pid:3189 exited_at:{seconds:1755590974 nanos:995994475}" Aug 19 08:09:34.996674 containerd[1577]: time="2025-08-19T08:09:34.996459824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" id:\"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" pid:3189 exited_at:{seconds:1755590974 nanos:995994475}" Aug 19 08:09:35.017808 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:09:35.771765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894034710.mount: Deactivated successfully. 
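The TaskExit events in this part of the log record exited_at as epoch seconds (for example seconds:1755590974 for the mount-cgroup and apply-sysctl-overwrites containers), which can be converted to confirm they agree with the surrounding journal timestamps. A quick illustrative conversion:

from datetime import datetime, timezone

# exited_at value taken from the TaskExit events above.
print(datetime.fromtimestamp(1755590974, tz=timezone.utc).isoformat())
# -> 2025-08-19T08:09:34+00:00, in line with the 08:09:34.* journal entries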
Aug 19 08:09:35.902490 kubelet[2705]: E0819 08:09:35.902439 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:35.905520 containerd[1577]: time="2025-08-19T08:09:35.905081359Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:09:35.926956 containerd[1577]: time="2025-08-19T08:09:35.926905938Z" level=info msg="Container fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:35.931259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046525531.mount: Deactivated successfully. Aug 19 08:09:35.937431 containerd[1577]: time="2025-08-19T08:09:35.937387414Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\"" Aug 19 08:09:35.938260 containerd[1577]: time="2025-08-19T08:09:35.938236847Z" level=info msg="StartContainer for \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\"" Aug 19 08:09:35.940073 containerd[1577]: time="2025-08-19T08:09:35.940041885Z" level=info msg="connecting to shim fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" protocol=ttrpc version=3 Aug 19 08:09:35.965244 systemd[1]: Started cri-containerd-fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608.scope - libcontainer container fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608. Aug 19 08:09:36.019114 containerd[1577]: time="2025-08-19T08:09:36.019054265Z" level=info msg="StartContainer for \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" returns successfully" Aug 19 08:09:36.020384 systemd[1]: cri-containerd-fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608.scope: Deactivated successfully. 
Aug 19 08:09:36.021907 containerd[1577]: time="2025-08-19T08:09:36.021794184Z" level=info msg="received exit event container_id:\"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" id:\"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" pid:3248 exited_at:{seconds:1755590976 nanos:21077743}" Aug 19 08:09:36.022401 containerd[1577]: time="2025-08-19T08:09:36.022331368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" id:\"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" pid:3248 exited_at:{seconds:1755590976 nanos:21077743}" Aug 19 08:09:36.212197 containerd[1577]: time="2025-08-19T08:09:36.212116546Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:36.212816 containerd[1577]: time="2025-08-19T08:09:36.212773235Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 19 08:09:36.214012 containerd[1577]: time="2025-08-19T08:09:36.213973640Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:09:36.215258 containerd[1577]: time="2025-08-19T08:09:36.215218038Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.779743102s" Aug 19 08:09:36.215258 containerd[1577]: time="2025-08-19T08:09:36.215250180Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 19 08:09:36.217690 containerd[1577]: time="2025-08-19T08:09:36.217647322Z" level=info msg="CreateContainer within sandbox \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 08:09:36.225374 containerd[1577]: time="2025-08-19T08:09:36.225328405Z" level=info msg="Container b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:36.231685 containerd[1577]: time="2025-08-19T08:09:36.231625047Z" level=info msg="CreateContainer within sandbox \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\"" Aug 19 08:09:36.232239 containerd[1577]: time="2025-08-19T08:09:36.232192368Z" level=info msg="StartContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\"" Aug 19 08:09:36.233117 containerd[1577]: time="2025-08-19T08:09:36.233049655Z" level=info msg="connecting to shim b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865" 
address="unix:///run/containerd/s/73460fe68537156be2638e7ca01111561eb9e39b6bdc29229d74d6282355048a" protocol=ttrpc version=3 Aug 19 08:09:36.256242 systemd[1]: Started cri-containerd-b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865.scope - libcontainer container b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865. Aug 19 08:09:36.286313 containerd[1577]: time="2025-08-19T08:09:36.286186981Z" level=info msg="StartContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" returns successfully" Aug 19 08:09:36.909179 kubelet[2705]: E0819 08:09:36.909131 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:36.913286 kubelet[2705]: E0819 08:09:36.913260 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:36.915026 containerd[1577]: time="2025-08-19T08:09:36.914993233Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:09:36.921470 kubelet[2705]: I0819 08:09:36.921057 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8djq8" podStartSLOduration=1.239165791 podStartE2EDuration="18.9210438s" podCreationTimestamp="2025-08-19 08:09:18 +0000 UTC" firstStartedPulling="2025-08-19 08:09:18.534167611 +0000 UTC m=+6.769275759" lastFinishedPulling="2025-08-19 08:09:36.21604561 +0000 UTC m=+24.451153768" observedRunningTime="2025-08-19 08:09:36.92014806 +0000 UTC m=+25.155256218" watchObservedRunningTime="2025-08-19 08:09:36.9210438 +0000 UTC m=+25.156151958" Aug 19 08:09:36.934408 containerd[1577]: time="2025-08-19T08:09:36.934234500Z" level=info msg="Container 80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:36.943416 containerd[1577]: time="2025-08-19T08:09:36.943363545Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\"" Aug 19 08:09:36.944367 containerd[1577]: time="2025-08-19T08:09:36.944300554Z" level=info msg="StartContainer for \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\"" Aug 19 08:09:36.945890 containerd[1577]: time="2025-08-19T08:09:36.945851380Z" level=info msg="connecting to shim 80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" protocol=ttrpc version=3 Aug 19 08:09:36.977394 systemd[1]: Started cri-containerd-80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8.scope - libcontainer container 80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8. Aug 19 08:09:37.012110 systemd[1]: cri-containerd-80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8.scope: Deactivated successfully. 
Aug 19 08:09:37.012655 containerd[1577]: time="2025-08-19T08:09:37.012609988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" id:\"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" pid:3326 exited_at:{seconds:1755590977 nanos:12315262}" Aug 19 08:09:37.014367 containerd[1577]: time="2025-08-19T08:09:37.014323519Z" level=info msg="received exit event container_id:\"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" id:\"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" pid:3326 exited_at:{seconds:1755590977 nanos:12315262}" Aug 19 08:09:37.022373 containerd[1577]: time="2025-08-19T08:09:37.022337876Z" level=info msg="StartContainer for \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" returns successfully" Aug 19 08:09:37.036035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8-rootfs.mount: Deactivated successfully. Aug 19 08:09:37.917990 kubelet[2705]: E0819 08:09:37.917930 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:37.918679 kubelet[2705]: E0819 08:09:37.918078 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:37.920420 containerd[1577]: time="2025-08-19T08:09:37.920379694Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:09:37.933395 containerd[1577]: time="2025-08-19T08:09:37.933334589Z" level=info msg="Container ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:37.940195 containerd[1577]: time="2025-08-19T08:09:37.940144684Z" level=info msg="CreateContainer within sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\"" Aug 19 08:09:37.940632 containerd[1577]: time="2025-08-19T08:09:37.940603369Z" level=info msg="StartContainer for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\"" Aug 19 08:09:37.941529 containerd[1577]: time="2025-08-19T08:09:37.941476997Z" level=info msg="connecting to shim ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67" address="unix:///run/containerd/s/1eaf4b584c81464bb7b2f35acc084282b98b1033da214d875d039267a898c626" protocol=ttrpc version=3 Aug 19 08:09:37.967213 systemd[1]: Started cri-containerd-ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67.scope - libcontainer container ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67. 
Aug 19 08:09:38.005881 containerd[1577]: time="2025-08-19T08:09:38.005837396Z" level=info msg="StartContainer for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" returns successfully" Aug 19 08:09:38.072264 containerd[1577]: time="2025-08-19T08:09:38.072217631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" id:\"b102e25a6fa6d5e42cabe4332df08b600fb4b4b84dcd936c3becf1b970f80a26\" pid:3394 exited_at:{seconds:1755590978 nanos:71897398}" Aug 19 08:09:38.154732 kubelet[2705]: I0819 08:09:38.154686 2705 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 08:09:38.186394 systemd[1]: Created slice kubepods-burstable-pod5dfdef0a_70a3_414c_9902_69304c8df22c.slice - libcontainer container kubepods-burstable-pod5dfdef0a_70a3_414c_9902_69304c8df22c.slice. Aug 19 08:09:38.197234 systemd[1]: Created slice kubepods-burstable-podc548ed44_ccb1_4098_b767_25ae7405f0c4.slice - libcontainer container kubepods-burstable-podc548ed44_ccb1_4098_b767_25ae7405f0c4.slice. Aug 19 08:09:38.200271 kubelet[2705]: I0819 08:09:38.200194 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c548ed44-ccb1-4098-b767-25ae7405f0c4-config-volume\") pod \"coredns-668d6bf9bc-6xsnh\" (UID: \"c548ed44-ccb1-4098-b767-25ae7405f0c4\") " pod="kube-system/coredns-668d6bf9bc-6xsnh" Aug 19 08:09:38.200271 kubelet[2705]: I0819 08:09:38.200241 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dfdef0a-70a3-414c-9902-69304c8df22c-config-volume\") pod \"coredns-668d6bf9bc-89pdr\" (UID: \"5dfdef0a-70a3-414c-9902-69304c8df22c\") " pod="kube-system/coredns-668d6bf9bc-89pdr" Aug 19 08:09:38.200271 kubelet[2705]: I0819 08:09:38.200260 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pndgt\" (UniqueName: \"kubernetes.io/projected/5dfdef0a-70a3-414c-9902-69304c8df22c-kube-api-access-pndgt\") pod \"coredns-668d6bf9bc-89pdr\" (UID: \"5dfdef0a-70a3-414c-9902-69304c8df22c\") " pod="kube-system/coredns-668d6bf9bc-89pdr" Aug 19 08:09:38.200394 kubelet[2705]: I0819 08:09:38.200313 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd74\" (UniqueName: \"kubernetes.io/projected/c548ed44-ccb1-4098-b767-25ae7405f0c4-kube-api-access-vrd74\") pod \"coredns-668d6bf9bc-6xsnh\" (UID: \"c548ed44-ccb1-4098-b767-25ae7405f0c4\") " pod="kube-system/coredns-668d6bf9bc-6xsnh" Aug 19 08:09:38.494527 kubelet[2705]: E0819 08:09:38.494406 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:38.495197 containerd[1577]: time="2025-08-19T08:09:38.495159147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89pdr,Uid:5dfdef0a-70a3-414c-9902-69304c8df22c,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:38.501161 kubelet[2705]: E0819 08:09:38.501129 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:38.501753 containerd[1577]: time="2025-08-19T08:09:38.501586905Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-6xsnh,Uid:c548ed44-ccb1-4098-b767-25ae7405f0c4,Namespace:kube-system,Attempt:0,}" Aug 19 08:09:38.955192 kubelet[2705]: E0819 08:09:38.955041 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:39.957211 kubelet[2705]: E0819 08:09:39.957169 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:40.264195 systemd-networkd[1498]: cilium_host: Link UP Aug 19 08:09:40.264693 systemd-networkd[1498]: cilium_net: Link UP Aug 19 08:09:40.264936 systemd-networkd[1498]: cilium_net: Gained carrier Aug 19 08:09:40.265140 systemd-networkd[1498]: cilium_host: Gained carrier Aug 19 08:09:40.373419 systemd-networkd[1498]: cilium_vxlan: Link UP Aug 19 08:09:40.373431 systemd-networkd[1498]: cilium_vxlan: Gained carrier Aug 19 08:09:40.589134 kernel: NET: Registered PF_ALG protocol family Aug 19 08:09:40.987833 kubelet[2705]: E0819 08:09:40.987471 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:41.082276 systemd-networkd[1498]: cilium_host: Gained IPv6LL Aug 19 08:09:41.146256 systemd-networkd[1498]: cilium_net: Gained IPv6LL Aug 19 08:09:41.259470 systemd-networkd[1498]: lxc_health: Link UP Aug 19 08:09:41.259788 systemd-networkd[1498]: lxc_health: Gained carrier Aug 19 08:09:41.558129 kernel: eth0: renamed from tmp9e928 Aug 19 08:09:41.560122 kernel: eth0: renamed from tmpdd930 Aug 19 08:09:41.562773 systemd-networkd[1498]: lxc8e50210c1769: Link UP Aug 19 08:09:41.563702 systemd-networkd[1498]: lxcb4b8e1e116c2: Link UP Aug 19 08:09:41.565699 systemd-networkd[1498]: lxcb4b8e1e116c2: Gained carrier Aug 19 08:09:41.568531 systemd-networkd[1498]: lxc8e50210c1769: Gained carrier Aug 19 08:09:42.002121 kubelet[2705]: E0819 08:09:42.001674 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:42.196469 kubelet[2705]: I0819 08:09:42.196119 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mn656" podStartSLOduration=8.882935596 podStartE2EDuration="25.196079754s" podCreationTimestamp="2025-08-19 08:09:17 +0000 UTC" firstStartedPulling="2025-08-19 08:09:18.121967893 +0000 UTC m=+6.357076051" lastFinishedPulling="2025-08-19 08:09:34.435112061 +0000 UTC m=+22.670220209" observedRunningTime="2025-08-19 08:09:38.968578614 +0000 UTC m=+27.203686772" watchObservedRunningTime="2025-08-19 08:09:42.196079754 +0000 UTC m=+30.431187912" Aug 19 08:09:42.237186 systemd-networkd[1498]: cilium_vxlan: Gained IPv6LL Aug 19 08:09:42.874277 systemd-networkd[1498]: lxc_health: Gained IPv6LL Aug 19 08:09:43.044219 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:49890.service - OpenSSH per-connection server daemon (10.0.0.1:49890). Aug 19 08:09:43.100821 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 49890 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:09:43.102417 sshd-session[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:09:43.108575 systemd-logind[1548]: New session 8 of user core. 
Aug 19 08:09:43.115231 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 08:09:43.254874 sshd[3869]: Connection closed by 10.0.0.1 port 49890 Aug 19 08:09:43.255139 sshd-session[3866]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:43.260170 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:49890.service: Deactivated successfully. Aug 19 08:09:43.262350 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:09:43.263333 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:09:43.264889 systemd-logind[1548]: Removed session 8. Aug 19 08:09:43.386261 systemd-networkd[1498]: lxc8e50210c1769: Gained IPv6LL Aug 19 08:09:43.578295 systemd-networkd[1498]: lxcb4b8e1e116c2: Gained IPv6LL Aug 19 08:09:44.904117 containerd[1577]: time="2025-08-19T08:09:44.903705450Z" level=info msg="connecting to shim 9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e" address="unix:///run/containerd/s/893c7fe23e59c36ca472b1bfff2fc725e702d67df76d614056deaab2c9c9fbf7" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:44.905045 containerd[1577]: time="2025-08-19T08:09:44.905002171Z" level=info msg="connecting to shim dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d" address="unix:///run/containerd/s/c1ce6c915a9d43a6d94518b705ab0ce6fe5fb5b17cdbe0c415112da7a19fc2bf" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:09:44.939392 systemd[1]: Started cri-containerd-9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e.scope - libcontainer container 9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e. Aug 19 08:09:44.945504 systemd[1]: Started cri-containerd-dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d.scope - libcontainer container dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d. 
Aug 19 08:09:44.963900 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:09:44.966521 systemd-resolved[1417]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:09:45.002298 containerd[1577]: time="2025-08-19T08:09:45.002235249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6xsnh,Uid:c548ed44-ccb1-4098-b767-25ae7405f0c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d\"" Aug 19 08:09:45.002950 kubelet[2705]: E0819 08:09:45.002906 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:45.005902 containerd[1577]: time="2025-08-19T08:09:45.005869056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89pdr,Uid:5dfdef0a-70a3-414c-9902-69304c8df22c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e\"" Aug 19 08:09:45.006465 kubelet[2705]: E0819 08:09:45.006434 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:45.010413 containerd[1577]: time="2025-08-19T08:09:45.010356148Z" level=info msg="CreateContainer within sandbox \"9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:09:45.010535 containerd[1577]: time="2025-08-19T08:09:45.010429998Z" level=info msg="CreateContainer within sandbox \"dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:09:45.021062 containerd[1577]: time="2025-08-19T08:09:45.021009122Z" level=info msg="Container 186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:45.025273 containerd[1577]: time="2025-08-19T08:09:45.025242978Z" level=info msg="Container ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:09:45.031640 containerd[1577]: time="2025-08-19T08:09:45.031590641Z" level=info msg="CreateContainer within sandbox \"dd930a5d05f0d88aa0e7c9466f0c68b2890ac884533cf55d1f240a93e080632d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f\"" Aug 19 08:09:45.032204 containerd[1577]: time="2025-08-19T08:09:45.032157127Z" level=info msg="StartContainer for \"186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f\"" Aug 19 08:09:45.033194 containerd[1577]: time="2025-08-19T08:09:45.033125019Z" level=info msg="connecting to shim 186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f" address="unix:///run/containerd/s/c1ce6c915a9d43a6d94518b705ab0ce6fe5fb5b17cdbe0c415112da7a19fc2bf" protocol=ttrpc version=3 Aug 19 08:09:45.034816 containerd[1577]: time="2025-08-19T08:09:45.034779681Z" level=info msg="CreateContainer within sandbox \"9e9285b7d6a547b7e52f282be3fd8887c0e7ca0be6d131481522c28d287f4b3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe\"" Aug 19 08:09:45.035416 containerd[1577]: 
time="2025-08-19T08:09:45.035361666Z" level=info msg="StartContainer for \"ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe\"" Aug 19 08:09:45.036328 containerd[1577]: time="2025-08-19T08:09:45.036293089Z" level=info msg="connecting to shim ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe" address="unix:///run/containerd/s/893c7fe23e59c36ca472b1bfff2fc725e702d67df76d614056deaab2c9c9fbf7" protocol=ttrpc version=3 Aug 19 08:09:45.056236 systemd[1]: Started cri-containerd-186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f.scope - libcontainer container 186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f. Aug 19 08:09:45.066585 systemd[1]: Started cri-containerd-ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe.scope - libcontainer container ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe. Aug 19 08:09:45.092655 containerd[1577]: time="2025-08-19T08:09:45.092585173Z" level=info msg="StartContainer for \"186db4f269eaf57f4db43516741358c651cf53514432936e7d6c76ca3ec4e20f\" returns successfully" Aug 19 08:09:45.108102 containerd[1577]: time="2025-08-19T08:09:45.108033830Z" level=info msg="StartContainer for \"ae9064c5bdb370be59a5784317d45cad9bae70e5ef7b7f0995588172759d1afe\" returns successfully" Aug 19 08:09:45.883610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540488409.mount: Deactivated successfully. Aug 19 08:09:45.998154 kubelet[2705]: E0819 08:09:45.997750 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:46.000605 kubelet[2705]: E0819 08:09:46.000499 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:46.012069 kubelet[2705]: I0819 08:09:46.011990 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6xsnh" podStartSLOduration=28.011965879 podStartE2EDuration="28.011965879s" podCreationTimestamp="2025-08-19 08:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:46.011567791 +0000 UTC m=+34.246675949" watchObservedRunningTime="2025-08-19 08:09:46.011965879 +0000 UTC m=+34.247074037" Aug 19 08:09:46.036417 kubelet[2705]: I0819 08:09:46.035113 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-89pdr" podStartSLOduration=28.035070661 podStartE2EDuration="28.035070661s" podCreationTimestamp="2025-08-19 08:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:09:46.022557333 +0000 UTC m=+34.257665481" watchObservedRunningTime="2025-08-19 08:09:46.035070661 +0000 UTC m=+34.270178819" Aug 19 08:09:46.134770 kubelet[2705]: I0819 08:09:46.134478 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 19 08:09:46.135036 kubelet[2705]: E0819 08:09:46.134922 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:47.002921 kubelet[2705]: E0819 08:09:47.002864 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:47.002921 kubelet[2705]: E0819 08:09:47.002907 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:47.003140 kubelet[2705]: E0819 08:09:47.002917 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:48.004289 kubelet[2705]: E0819 08:09:48.004251 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:48.004751 kubelet[2705]: E0819 08:09:48.004310 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:09:48.271662 systemd[1]: Started sshd@8-10.0.0.49:22-10.0.0.1:38924.service - OpenSSH per-connection server daemon (10.0.0.1:38924). Aug 19 08:09:48.331859 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:09:48.333414 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:09:48.338232 systemd-logind[1548]: New session 9 of user core. Aug 19 08:09:48.349281 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:09:48.465074 sshd[4061]: Connection closed by 10.0.0.1 port 38924 Aug 19 08:09:48.465518 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:48.469200 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:38924.service: Deactivated successfully. Aug 19 08:09:48.471700 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:09:48.473433 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:09:48.474805 systemd-logind[1548]: Removed session 9. Aug 19 08:09:53.485740 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:38936.service - OpenSSH per-connection server daemon (10.0.0.1:38936). Aug 19 08:09:53.535965 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 38936 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:09:53.538077 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:09:53.542590 systemd-logind[1548]: New session 10 of user core. Aug 19 08:09:53.553967 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 08:09:53.665938 sshd[4082]: Connection closed by 10.0.0.1 port 38936 Aug 19 08:09:53.666327 sshd-session[4079]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:53.670547 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:38936.service: Deactivated successfully. Aug 19 08:09:53.672722 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:09:53.673547 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:09:53.674670 systemd-logind[1548]: Removed session 10. Aug 19 08:09:58.681924 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:35102.service - OpenSSH per-connection server daemon (10.0.0.1:35102). 
Aug 19 08:09:58.731593 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 35102 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:09:58.733493 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:09:58.738232 systemd-logind[1548]: New session 11 of user core. Aug 19 08:09:58.756342 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:09:58.875014 sshd[4099]: Connection closed by 10.0.0.1 port 35102 Aug 19 08:09:58.875369 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Aug 19 08:09:58.879512 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:35102.service: Deactivated successfully. Aug 19 08:09:58.881467 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:09:58.882346 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Aug 19 08:09:58.883527 systemd-logind[1548]: Removed session 11. Aug 19 08:10:03.888484 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:35118.service - OpenSSH per-connection server daemon (10.0.0.1:35118). Aug 19 08:10:03.953988 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 35118 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:03.956062 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:03.961407 systemd-logind[1548]: New session 12 of user core. Aug 19 08:10:03.971288 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:10:04.088817 sshd[4117]: Connection closed by 10.0.0.1 port 35118 Aug 19 08:10:04.089368 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:04.099800 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:35118.service: Deactivated successfully. Aug 19 08:10:04.101919 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:10:04.102849 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:10:04.105731 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:35134.service - OpenSSH per-connection server daemon (10.0.0.1:35134). Aug 19 08:10:04.106437 systemd-logind[1548]: Removed session 12. Aug 19 08:10:04.160648 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 35134 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:04.162394 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:04.167259 systemd-logind[1548]: New session 13 of user core. Aug 19 08:10:04.173245 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 08:10:04.323950 sshd[4134]: Connection closed by 10.0.0.1 port 35134 Aug 19 08:10:04.325233 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:04.335482 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:35134.service: Deactivated successfully. Aug 19 08:10:04.337862 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:10:04.341554 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:10:04.345693 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:35146.service - OpenSSH per-connection server daemon (10.0.0.1:35146). Aug 19 08:10:04.347698 systemd-logind[1548]: Removed session 13. 
Aug 19 08:10:04.392132 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 35146 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:04.394250 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:04.399587 systemd-logind[1548]: New session 14 of user core. Aug 19 08:10:04.409238 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:10:04.522657 sshd[4148]: Connection closed by 10.0.0.1 port 35146 Aug 19 08:10:04.523006 sshd-session[4145]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:04.526939 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:35146.service: Deactivated successfully. Aug 19 08:10:04.528995 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:10:04.531239 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:10:04.532341 systemd-logind[1548]: Removed session 14. Aug 19 08:10:09.550218 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:36804.service - OpenSSH per-connection server daemon (10.0.0.1:36804). Aug 19 08:10:09.605527 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 36804 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:09.606983 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:09.611995 systemd-logind[1548]: New session 15 of user core. Aug 19 08:10:09.626238 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:10:09.745511 sshd[4165]: Connection closed by 10.0.0.1 port 36804 Aug 19 08:10:09.745869 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:09.750313 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:36804.service: Deactivated successfully. Aug 19 08:10:09.752448 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:10:09.753487 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:10:09.754786 systemd-logind[1548]: Removed session 15. Aug 19 08:10:14.768923 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:36806.service - OpenSSH per-connection server daemon (10.0.0.1:36806). Aug 19 08:10:14.821837 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 36806 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:14.823435 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:14.827977 systemd-logind[1548]: New session 16 of user core. Aug 19 08:10:14.838261 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 08:10:14.948933 sshd[4184]: Connection closed by 10.0.0.1 port 36806 Aug 19 08:10:14.949354 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:14.958059 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:36806.service: Deactivated successfully. Aug 19 08:10:14.960251 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:10:14.961196 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Aug 19 08:10:14.963871 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:36808.service - OpenSSH per-connection server daemon (10.0.0.1:36808). Aug 19 08:10:14.964717 systemd-logind[1548]: Removed session 16. 
Aug 19 08:10:15.021748 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 36808 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:15.023543 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:15.028128 systemd-logind[1548]: New session 17 of user core. Aug 19 08:10:15.037223 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:10:15.326594 sshd[4200]: Connection closed by 10.0.0.1 port 36808 Aug 19 08:10:15.326989 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:15.339027 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:36808.service: Deactivated successfully. Aug 19 08:10:15.341390 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:10:15.342311 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:10:15.346374 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818). Aug 19 08:10:15.347047 systemd-logind[1548]: Removed session 17. Aug 19 08:10:15.399270 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:15.400779 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:15.405280 systemd-logind[1548]: New session 18 of user core. Aug 19 08:10:15.417323 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 08:10:16.207474 sshd[4215]: Connection closed by 10.0.0.1 port 36818 Aug 19 08:10:16.208199 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:16.221017 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:36818.service: Deactivated successfully. Aug 19 08:10:16.224719 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:10:16.225889 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:10:16.232322 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:36822.service - OpenSSH per-connection server daemon (10.0.0.1:36822). Aug 19 08:10:16.234686 systemd-logind[1548]: Removed session 18. Aug 19 08:10:16.283604 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 36822 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:16.285037 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:16.289772 systemd-logind[1548]: New session 19 of user core. Aug 19 08:10:16.299357 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:10:16.791040 sshd[4237]: Connection closed by 10.0.0.1 port 36822 Aug 19 08:10:16.791456 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:16.805886 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:36822.service: Deactivated successfully. Aug 19 08:10:16.807795 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:10:16.808721 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:10:16.811313 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:36826.service - OpenSSH per-connection server daemon (10.0.0.1:36826). Aug 19 08:10:16.812060 systemd-logind[1548]: Removed session 19. 
Aug 19 08:10:16.870061 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 36826 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:16.871893 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:16.877262 systemd-logind[1548]: New session 20 of user core. Aug 19 08:10:16.882247 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:10:17.006952 sshd[4252]: Connection closed by 10.0.0.1 port 36826 Aug 19 08:10:17.007506 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:17.013892 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:36826.service: Deactivated successfully. Aug 19 08:10:17.015974 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:10:17.016933 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:10:17.018394 systemd-logind[1548]: Removed session 20. Aug 19 08:10:20.841406 kubelet[2705]: E0819 08:10:20.841356 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:22.024937 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:58746.service - OpenSSH per-connection server daemon (10.0.0.1:58746). Aug 19 08:10:22.077434 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 58746 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:22.079142 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:22.083681 systemd-logind[1548]: New session 21 of user core. Aug 19 08:10:22.094249 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:10:22.203457 sshd[4270]: Connection closed by 10.0.0.1 port 58746 Aug 19 08:10:22.203849 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:22.207856 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:58746.service: Deactivated successfully. Aug 19 08:10:22.209878 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:10:22.210739 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Aug 19 08:10:22.211990 systemd-logind[1548]: Removed session 21. Aug 19 08:10:27.226052 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:58762.service - OpenSSH per-connection server daemon (10.0.0.1:58762). Aug 19 08:10:27.277021 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 58762 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:27.278571 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:27.283576 systemd-logind[1548]: New session 22 of user core. Aug 19 08:10:27.292268 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:10:27.510178 sshd[4288]: Connection closed by 10.0.0.1 port 58762 Aug 19 08:10:27.510482 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:27.515254 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:58762.service: Deactivated successfully. Aug 19 08:10:27.517179 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:10:27.518071 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:10:27.519378 systemd-logind[1548]: Removed session 22. 
Aug 19 08:10:30.840677 kubelet[2705]: E0819 08:10:30.840596 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:32.528804 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:44590.service - OpenSSH per-connection server daemon (10.0.0.1:44590). Aug 19 08:10:32.591585 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 44590 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:32.592946 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:32.597844 systemd-logind[1548]: New session 23 of user core. Aug 19 08:10:32.608260 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 08:10:32.722005 sshd[4304]: Connection closed by 10.0.0.1 port 44590 Aug 19 08:10:32.722411 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:32.727573 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:44590.service: Deactivated successfully. Aug 19 08:10:32.730299 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 08:10:32.731124 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Aug 19 08:10:32.732908 systemd-logind[1548]: Removed session 23. Aug 19 08:10:33.843770 kubelet[2705]: E0819 08:10:33.843719 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:37.739371 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:44592.service - OpenSSH per-connection server daemon (10.0.0.1:44592). Aug 19 08:10:37.793866 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 44592 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:37.796295 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:37.801498 systemd-logind[1548]: New session 24 of user core. Aug 19 08:10:37.810381 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 08:10:37.923756 sshd[4322]: Connection closed by 10.0.0.1 port 44592 Aug 19 08:10:37.924281 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:37.937335 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:44592.service: Deactivated successfully. Aug 19 08:10:37.939292 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 08:10:37.940162 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit. Aug 19 08:10:37.943448 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:44604.service - OpenSSH per-connection server daemon (10.0.0.1:44604). Aug 19 08:10:37.944335 systemd-logind[1548]: Removed session 24. Aug 19 08:10:38.006838 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 44604 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:38.008615 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:38.013556 systemd-logind[1548]: New session 25 of user core. Aug 19 08:10:38.023271 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 19 08:10:39.694873 containerd[1577]: time="2025-08-19T08:10:39.694360861Z" level=info msg="StopContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" with timeout 30 (s)" Aug 19 08:10:39.701151 containerd[1577]: time="2025-08-19T08:10:39.701084634Z" level=info msg="Stop container \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" with signal terminated" Aug 19 08:10:39.713884 systemd[1]: cri-containerd-b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865.scope: Deactivated successfully. Aug 19 08:10:39.715917 containerd[1577]: time="2025-08-19T08:10:39.715859286Z" level=info msg="received exit event container_id:\"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" id:\"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" pid:3289 exited_at:{seconds:1755591039 nanos:715546888}" Aug 19 08:10:39.716160 containerd[1577]: time="2025-08-19T08:10:39.716102912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" id:\"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" pid:3289 exited_at:{seconds:1755591039 nanos:715546888}" Aug 19 08:10:39.728074 containerd[1577]: time="2025-08-19T08:10:39.728012587Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:10:39.736432 containerd[1577]: time="2025-08-19T08:10:39.736377848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" id:\"9524af20f8a5096c132e5c2f7bc28067c6f7383aa5f3cf563871c7685a865cdf\" pid:4366 exited_at:{seconds:1755591039 nanos:735819581}" Aug 19 08:10:39.739519 containerd[1577]: time="2025-08-19T08:10:39.739452625Z" level=info msg="StopContainer for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" with timeout 2 (s)" Aug 19 08:10:39.739809 containerd[1577]: time="2025-08-19T08:10:39.739764481Z" level=info msg="Stop container \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" with signal terminated" Aug 19 08:10:39.741693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865-rootfs.mount: Deactivated successfully. Aug 19 08:10:39.747854 systemd-networkd[1498]: lxc_health: Link DOWN Aug 19 08:10:39.747866 systemd-networkd[1498]: lxc_health: Lost carrier Aug 19 08:10:39.771537 systemd[1]: cri-containerd-ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67.scope: Deactivated successfully. Aug 19 08:10:39.771905 systemd[1]: cri-containerd-ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67.scope: Consumed 6.536s CPU time, 123.9M memory peak, 224K read from disk, 13.3M written to disk. 
Aug 19 08:10:39.773664 containerd[1577]: time="2025-08-19T08:10:39.773631830Z" level=info msg="received exit event container_id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" pid:3363 exited_at:{seconds:1755591039 nanos:773409524}" Aug 19 08:10:39.773945 containerd[1577]: time="2025-08-19T08:10:39.773708236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" id:\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" pid:3363 exited_at:{seconds:1755591039 nanos:773409524}" Aug 19 08:10:39.798182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67-rootfs.mount: Deactivated successfully. Aug 19 08:10:39.811810 containerd[1577]: time="2025-08-19T08:10:39.811731999Z" level=info msg="StopContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" returns successfully" Aug 19 08:10:39.814728 containerd[1577]: time="2025-08-19T08:10:39.814690354Z" level=info msg="StopPodSandbox for \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\"" Aug 19 08:10:39.814792 containerd[1577]: time="2025-08-19T08:10:39.814774996Z" level=info msg="Container to stop \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.823313 systemd[1]: cri-containerd-eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278.scope: Deactivated successfully. Aug 19 08:10:39.824081 containerd[1577]: time="2025-08-19T08:10:39.824036649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" id:\"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" pid:2985 exit_status:137 exited_at:{seconds:1755591039 nanos:823748850}" Aug 19 08:10:39.852049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278-rootfs.mount: Deactivated successfully. 
Aug 19 08:10:39.965296 containerd[1577]: time="2025-08-19T08:10:39.965243930Z" level=info msg="StopContainer for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" returns successfully" Aug 19 08:10:39.965999 containerd[1577]: time="2025-08-19T08:10:39.965917277Z" level=info msg="StopPodSandbox for \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\"" Aug 19 08:10:39.966181 containerd[1577]: time="2025-08-19T08:10:39.966025955Z" level=info msg="Container to stop \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.966181 containerd[1577]: time="2025-08-19T08:10:39.966042927Z" level=info msg="Container to stop \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.966181 containerd[1577]: time="2025-08-19T08:10:39.966052135Z" level=info msg="Container to stop \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.966181 containerd[1577]: time="2025-08-19T08:10:39.966069287Z" level=info msg="Container to stop \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.966181 containerd[1577]: time="2025-08-19T08:10:39.966077974Z" level=info msg="Container to stop \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:10:39.973138 containerd[1577]: time="2025-08-19T08:10:39.973070841Z" level=info msg="shim disconnected" id=eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278 namespace=k8s.io Aug 19 08:10:39.973138 containerd[1577]: time="2025-08-19T08:10:39.973128030Z" level=warning msg="cleaning up after shim disconnected" id=eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278 namespace=k8s.io Aug 19 08:10:39.973497 systemd[1]: cri-containerd-9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c.scope: Deactivated successfully. Aug 19 08:10:39.977860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278-shm.mount: Deactivated successfully. Aug 19 08:10:39.996169 containerd[1577]: time="2025-08-19T08:10:39.973136557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:10:40.001769 containerd[1577]: time="2025-08-19T08:10:40.001699896Z" level=info msg="received exit event sandbox_id:\"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" exit_status:137 exited_at:{seconds:1755591039 nanos:823748850}" Aug 19 08:10:40.004619 containerd[1577]: time="2025-08-19T08:10:40.004581582Z" level=info msg="TearDown network for sandbox \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" successfully" Aug 19 08:10:40.004619 containerd[1577]: time="2025-08-19T08:10:40.004612722Z" level=info msg="StopPodSandbox for \"eb10bfdd15542ba23fad298d4dbf4e61185aa24cb7cdebbb1c18369e2aa3e278\" returns successfully" Aug 19 08:10:40.014733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c-rootfs.mount: Deactivated successfully. 
Aug 19 08:10:40.018522 containerd[1577]: time="2025-08-19T08:10:40.018462703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" id:\"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" pid:2856 exit_status:137 exited_at:{seconds:1755591039 nanos:980334455}" Aug 19 08:10:40.020573 containerd[1577]: time="2025-08-19T08:10:40.020515886Z" level=info msg="received exit event sandbox_id:\"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" exit_status:137 exited_at:{seconds:1755591039 nanos:980334455}" Aug 19 08:10:40.020573 containerd[1577]: time="2025-08-19T08:10:40.020654872Z" level=info msg="shim disconnected" id=9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c namespace=k8s.io Aug 19 08:10:40.020573 containerd[1577]: time="2025-08-19T08:10:40.020688336Z" level=warning msg="cleaning up after shim disconnected" id=9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c namespace=k8s.io Aug 19 08:10:40.020573 containerd[1577]: time="2025-08-19T08:10:40.020696712Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:10:40.022272 containerd[1577]: time="2025-08-19T08:10:40.022226635Z" level=info msg="TearDown network for sandbox \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" successfully" Aug 19 08:10:40.022272 containerd[1577]: time="2025-08-19T08:10:40.022252865Z" level=info msg="StopPodSandbox for \"9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c\" returns successfully" Aug 19 08:10:40.096349 kubelet[2705]: I0819 08:10:40.096275 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-etc-cni-netd\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.096349 kubelet[2705]: I0819 08:10:40.096345 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-kernel\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096370 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-cgroup\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096387 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-net\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096431 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-clustermesh-secrets\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096458 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxnr6\" (UniqueName: 
\"kubernetes.io/projected/7bfdcc58-3a50-4307-876f-d489ae79afb2-kube-api-access-fxnr6\") pod \"7bfdcc58-3a50-4307-876f-d489ae79afb2\" (UID: \"7bfdcc58-3a50-4307-876f-d489ae79afb2\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096477 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-lib-modules\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097254 kubelet[2705]: I0819 08:10:40.096470 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096496 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hubble-tls\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096615 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-bpf-maps\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096651 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwcqb\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-kube-api-access-lwcqb\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096688 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfdcc58-3a50-4307-876f-d489ae79afb2-cilium-config-path\") pod \"7bfdcc58-3a50-4307-876f-d489ae79afb2\" (UID: \"7bfdcc58-3a50-4307-876f-d489ae79afb2\") " Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096709 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-run\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097526 kubelet[2705]: I0819 08:10:40.096738 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-config-path\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096758 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cni-path\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096778 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hostproc\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096796 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-xtables-lock\") pod \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\" (UID: \"bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80\") " Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096862 2705 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096891 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.097750 kubelet[2705]: I0819 08:10:40.096916 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.097957 kubelet[2705]: I0819 08:10:40.097542 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.097957 kubelet[2705]: I0819 08:10:40.097578 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.097957 kubelet[2705]: I0819 08:10:40.097600 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.101659 kubelet[2705]: I0819 08:10:40.101610 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bfdcc58-3a50-4307-876f-d489ae79afb2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bfdcc58-3a50-4307-876f-d489ae79afb2" (UID: "7bfdcc58-3a50-4307-876f-d489ae79afb2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:10:40.102328 kubelet[2705]: I0819 08:10:40.102294 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.102390 kubelet[2705]: I0819 08:10:40.102336 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.102711 kubelet[2705]: I0819 08:10:40.102647 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.103021 kubelet[2705]: I0819 08:10:40.102998 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:10:40.103694 kubelet[2705]: I0819 08:10:40.103662 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-kube-api-access-lwcqb" (OuterVolumeSpecName: "kube-api-access-lwcqb") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "kube-api-access-lwcqb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:10:40.104026 kubelet[2705]: I0819 08:10:40.103923 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:10:40.104655 kubelet[2705]: I0819 08:10:40.104615 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:10:40.106223 kubelet[2705]: I0819 08:10:40.106176 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bfdcc58-3a50-4307-876f-d489ae79afb2-kube-api-access-fxnr6" (OuterVolumeSpecName: "kube-api-access-fxnr6") pod "7bfdcc58-3a50-4307-876f-d489ae79afb2" (UID: "7bfdcc58-3a50-4307-876f-d489ae79afb2"). InnerVolumeSpecName "kube-api-access-fxnr6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:10:40.106747 kubelet[2705]: I0819 08:10:40.106717 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" (UID: "bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 08:10:40.147127 kubelet[2705]: I0819 08:10:40.146781 2705 scope.go:117] "RemoveContainer" containerID="b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865" Aug 19 08:10:40.151051 containerd[1577]: time="2025-08-19T08:10:40.150999899Z" level=info msg="RemoveContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\"" Aug 19 08:10:40.153920 systemd[1]: Removed slice kubepods-besteffort-pod7bfdcc58_3a50_4307_876f_d489ae79afb2.slice - libcontainer container kubepods-besteffort-pod7bfdcc58_3a50_4307_876f_d489ae79afb2.slice. Aug 19 08:10:40.166388 systemd[1]: Removed slice kubepods-burstable-podbf0f0808_84bb_4aad_b3a6_c95dc3ec4c80.slice - libcontainer container kubepods-burstable-podbf0f0808_84bb_4aad_b3a6_c95dc3ec4c80.slice. Aug 19 08:10:40.166848 systemd[1]: kubepods-burstable-podbf0f0808_84bb_4aad_b3a6_c95dc3ec4c80.slice: Consumed 6.652s CPU time, 124.3M memory peak, 228K read from disk, 13.3M written to disk. Aug 19 08:10:40.170739 containerd[1577]: time="2025-08-19T08:10:40.170622721Z" level=info msg="RemoveContainer for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" returns successfully" Aug 19 08:10:40.171213 kubelet[2705]: I0819 08:10:40.171165 2705 scope.go:117] "RemoveContainer" containerID="b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865" Aug 19 08:10:40.172981 containerd[1577]: time="2025-08-19T08:10:40.172904180Z" level=error msg="ContainerStatus for \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\": not found" Aug 19 08:10:40.173305 kubelet[2705]: E0819 08:10:40.173257 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\": not found" containerID="b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865" Aug 19 08:10:40.173538 kubelet[2705]: I0819 08:10:40.173323 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865"} err="failed to get container status \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\": rpc error: code = NotFound desc = an error occurred when try to find container \"b72f1da27dfa902331524e736b4d579bcad61ba61cdd76e63be24a4f57af9865\": not found" Aug 19 08:10:40.173538 kubelet[2705]: I0819 08:10:40.173514 2705 scope.go:117] "RemoveContainer" containerID="ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67" Aug 19 08:10:40.175424 containerd[1577]: time="2025-08-19T08:10:40.175394187Z" level=info msg="RemoveContainer for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\"" Aug 19 08:10:40.184159 containerd[1577]: time="2025-08-19T08:10:40.184072859Z" level=info msg="RemoveContainer for 
\"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" returns successfully" Aug 19 08:10:40.184627 kubelet[2705]: I0819 08:10:40.184562 2705 scope.go:117] "RemoveContainer" containerID="80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8" Aug 19 08:10:40.187253 containerd[1577]: time="2025-08-19T08:10:40.187204182Z" level=info msg="RemoveContainer for \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\"" Aug 19 08:10:40.194069 containerd[1577]: time="2025-08-19T08:10:40.193999935Z" level=info msg="RemoveContainer for \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" returns successfully" Aug 19 08:10:40.194334 kubelet[2705]: I0819 08:10:40.194294 2705 scope.go:117] "RemoveContainer" containerID="fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608" Aug 19 08:10:40.197026 containerd[1577]: time="2025-08-19T08:10:40.196882543Z" level=info msg="RemoveContainer for \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197108 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197144 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197158 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bfdcc58-3a50-4307-876f-d489ae79afb2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197170 2705 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197182 2705 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197193 2705 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197204 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197210 kubelet[2705]: I0819 08:10:40.197214 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197225 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197236 2705 reconciler_common.go:299] "Volume 
detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197248 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxnr6\" (UniqueName: \"kubernetes.io/projected/7bfdcc58-3a50-4307-876f-d489ae79afb2-kube-api-access-fxnr6\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197265 2705 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197276 2705 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197289 2705 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.197475 kubelet[2705]: I0819 08:10:40.197301 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwcqb\" (UniqueName: \"kubernetes.io/projected/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80-kube-api-access-lwcqb\") on node \"localhost\" DevicePath \"\"" Aug 19 08:10:40.202870 containerd[1577]: time="2025-08-19T08:10:40.202825036Z" level=info msg="RemoveContainer for \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" returns successfully" Aug 19 08:10:40.203320 kubelet[2705]: I0819 08:10:40.203256 2705 scope.go:117] "RemoveContainer" containerID="90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c" Aug 19 08:10:40.205420 containerd[1577]: time="2025-08-19T08:10:40.205356684Z" level=info msg="RemoveContainer for \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\"" Aug 19 08:10:40.211048 containerd[1577]: time="2025-08-19T08:10:40.210986069Z" level=info msg="RemoveContainer for \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" returns successfully" Aug 19 08:10:40.211387 kubelet[2705]: I0819 08:10:40.211347 2705 scope.go:117] "RemoveContainer" containerID="ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b" Aug 19 08:10:40.213066 containerd[1577]: time="2025-08-19T08:10:40.213034262Z" level=info msg="RemoveContainer for \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\"" Aug 19 08:10:40.218199 containerd[1577]: time="2025-08-19T08:10:40.218009308Z" level=info msg="RemoveContainer for \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" returns successfully" Aug 19 08:10:40.218338 kubelet[2705]: I0819 08:10:40.218265 2705 scope.go:117] "RemoveContainer" containerID="ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67" Aug 19 08:10:40.219320 containerd[1577]: time="2025-08-19T08:10:40.219147372Z" level=error msg="ContainerStatus for \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\": not found" Aug 19 08:10:40.219501 kubelet[2705]: E0819 08:10:40.219350 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = an error occurred when try to find container \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\": not found" containerID="ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67" Aug 19 08:10:40.219501 kubelet[2705]: I0819 08:10:40.219394 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67"} err="failed to get container status \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef2a2511d75bd805b32bd72ded06595beed17f0b25807190791b6b30a9431f67\": not found" Aug 19 08:10:40.219501 kubelet[2705]: I0819 08:10:40.219452 2705 scope.go:117] "RemoveContainer" containerID="80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8" Aug 19 08:10:40.219746 containerd[1577]: time="2025-08-19T08:10:40.219710869Z" level=error msg="ContainerStatus for \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\": not found" Aug 19 08:10:40.220037 kubelet[2705]: E0819 08:10:40.219930 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\": not found" containerID="80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8" Aug 19 08:10:40.220037 kubelet[2705]: I0819 08:10:40.220008 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8"} err="failed to get container status \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"80df564c4b30e9de3b57e38dbd3f2a7ed802f08dd2ebc591b08e5216cb720cc8\": not found" Aug 19 08:10:40.220242 kubelet[2705]: I0819 08:10:40.220031 2705 scope.go:117] "RemoveContainer" containerID="fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608" Aug 19 08:10:40.220513 containerd[1577]: time="2025-08-19T08:10:40.220436025Z" level=error msg="ContainerStatus for \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\": not found" Aug 19 08:10:40.220833 kubelet[2705]: E0819 08:10:40.220789 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\": not found" containerID="fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608" Aug 19 08:10:40.221148 kubelet[2705]: I0819 08:10:40.220956 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608"} err="failed to get container status \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb35c88c7854b0565dd765dab09d82053bfeac6fe1d8e882defd35a19a08f608\": not found" Aug 19 08:10:40.221148 
kubelet[2705]: I0819 08:10:40.221055 2705 scope.go:117] "RemoveContainer" containerID="90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c" Aug 19 08:10:40.221562 containerd[1577]: time="2025-08-19T08:10:40.221446655Z" level=error msg="ContainerStatus for \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\": not found" Aug 19 08:10:40.221659 kubelet[2705]: E0819 08:10:40.221623 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\": not found" containerID="90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c" Aug 19 08:10:40.221720 kubelet[2705]: I0819 08:10:40.221667 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c"} err="failed to get container status \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\": rpc error: code = NotFound desc = an error occurred when try to find container \"90ae8b756cb55ef9a4bc966006daa91e17fee029f5baac06ec90d1991161b28c\": not found" Aug 19 08:10:40.221720 kubelet[2705]: I0819 08:10:40.221686 2705 scope.go:117] "RemoveContainer" containerID="ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b" Aug 19 08:10:40.221910 containerd[1577]: time="2025-08-19T08:10:40.221865505Z" level=error msg="ContainerStatus for \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\": not found" Aug 19 08:10:40.222111 kubelet[2705]: E0819 08:10:40.222038 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\": not found" containerID="ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b" Aug 19 08:10:40.222111 kubelet[2705]: I0819 08:10:40.222074 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b"} err="failed to get container status \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff7bf63dfa60768c18353d151b2eeb2087af8554c50c89192cd4d64999ff178b\": not found" Aug 19 08:10:40.741204 systemd[1]: var-lib-kubelet-pods-7bfdcc58\x2d3a50\x2d4307\x2d876f\x2dd489ae79afb2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxnr6.mount: Deactivated successfully. Aug 19 08:10:40.741338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b2650909b167e01248946ebbc3326e65d9a66abafad7cb76b913d7d4474f39c-shm.mount: Deactivated successfully. Aug 19 08:10:40.741416 systemd[1]: var-lib-kubelet-pods-bf0f0808\x2d84bb\x2d4aad\x2db3a6\x2dc95dc3ec4c80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwcqb.mount: Deactivated successfully. 
Aug 19 08:10:40.741499 systemd[1]: var-lib-kubelet-pods-bf0f0808\x2d84bb\x2d4aad\x2db3a6\x2dc95dc3ec4c80-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 08:10:40.741577 systemd[1]: var-lib-kubelet-pods-bf0f0808\x2d84bb\x2d4aad\x2db3a6\x2dc95dc3ec4c80-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 08:10:40.840695 kubelet[2705]: E0819 08:10:40.840646 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:41.399428 sshd[4338]: Connection closed by 10.0.0.1 port 44604 Aug 19 08:10:41.400174 sshd-session[4335]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:41.418013 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:44604.service: Deactivated successfully. Aug 19 08:10:41.420149 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 08:10:41.421000 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit. Aug 19 08:10:41.423827 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:39590.service - OpenSSH per-connection server daemon (10.0.0.1:39590). Aug 19 08:10:41.424472 systemd-logind[1548]: Removed session 25. Aug 19 08:10:41.481252 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 39590 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:41.483317 sshd-session[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:41.488364 systemd-logind[1548]: New session 26 of user core. Aug 19 08:10:41.495215 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 19 08:10:41.843129 kubelet[2705]: I0819 08:10:41.842875 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bfdcc58-3a50-4307-876f-d489ae79afb2" path="/var/lib/kubelet/pods/7bfdcc58-3a50-4307-876f-d489ae79afb2/volumes" Aug 19 08:10:41.844172 kubelet[2705]: I0819 08:10:41.844143 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" path="/var/lib/kubelet/pods/bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80/volumes" Aug 19 08:10:41.909386 kubelet[2705]: E0819 08:10:41.909328 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:10:42.393417 sshd[4494]: Connection closed by 10.0.0.1 port 39590 Aug 19 08:10:42.394190 sshd-session[4491]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:42.408701 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:39590.service: Deactivated successfully. Aug 19 08:10:42.414869 systemd[1]: session-26.scope: Deactivated successfully. Aug 19 08:10:42.416266 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit. Aug 19 08:10:42.420045 systemd-logind[1548]: Removed session 26. Aug 19 08:10:42.425470 systemd[1]: Started sshd@26-10.0.0.49:22-10.0.0.1:39596.service - OpenSSH per-connection server daemon (10.0.0.1:39596). 
Aug 19 08:10:42.430802 kubelet[2705]: I0819 08:10:42.430757 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf0f0808-84bb-4aad-b3a6-c95dc3ec4c80" containerName="cilium-agent" Aug 19 08:10:42.430802 kubelet[2705]: I0819 08:10:42.430790 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="7bfdcc58-3a50-4307-876f-d489ae79afb2" containerName="cilium-operator" Aug 19 08:10:42.444176 systemd[1]: Created slice kubepods-burstable-pod3b4facce_421f_4427_a71b_5656e7ec5e4e.slice - libcontainer container kubepods-burstable-pod3b4facce_421f_4427_a71b_5656e7ec5e4e.slice. Aug 19 08:10:42.483407 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 39596 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:42.485071 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:42.490009 systemd-logind[1548]: New session 27 of user core. Aug 19 08:10:42.500482 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 19 08:10:42.510660 kubelet[2705]: I0819 08:10:42.510590 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-lib-modules\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510660 kubelet[2705]: I0819 08:10:42.510651 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b4facce-421f-4427-a71b-5656e7ec5e4e-cilium-ipsec-secrets\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510674 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-host-proc-sys-net\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510697 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-hostproc\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510734 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-host-proc-sys-kernel\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510777 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-cilium-run\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510805 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-xtables-lock\") 
pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.510830 kubelet[2705]: I0819 08:10:42.510826 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b4facce-421f-4427-a71b-5656e7ec5e4e-clustermesh-secrets\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510849 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b4facce-421f-4427-a71b-5656e7ec5e4e-hubble-tls\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510872 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8gf\" (UniqueName: \"kubernetes.io/projected/3b4facce-421f-4427-a71b-5656e7ec5e4e-kube-api-access-5s8gf\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510896 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-bpf-maps\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510915 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-cilium-cgroup\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510949 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-etc-cni-netd\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511033 kubelet[2705]: I0819 08:10:42.510970 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b4facce-421f-4427-a71b-5656e7ec5e4e-cni-path\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.511257 kubelet[2705]: I0819 08:10:42.511002 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b4facce-421f-4427-a71b-5656e7ec5e4e-cilium-config-path\") pod \"cilium-6nszs\" (UID: \"3b4facce-421f-4427-a71b-5656e7ec5e4e\") " pod="kube-system/cilium-6nszs" Aug 19 08:10:42.555402 sshd[4510]: Connection closed by 10.0.0.1 port 39596 Aug 19 08:10:42.555894 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:42.576703 systemd[1]: sshd@26-10.0.0.49:22-10.0.0.1:39596.service: Deactivated successfully. Aug 19 08:10:42.579249 systemd[1]: session-27.scope: Deactivated successfully. Aug 19 08:10:42.580403 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit. 
Aug 19 08:10:42.582718 systemd-logind[1548]: Removed session 27. Aug 19 08:10:42.584264 systemd[1]: Started sshd@27-10.0.0.49:22-10.0.0.1:39600.service - OpenSSH per-connection server daemon (10.0.0.1:39600). Aug 19 08:10:42.656846 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 39600 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:10:42.658654 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:10:42.663335 systemd-logind[1548]: New session 28 of user core. Aug 19 08:10:42.671251 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 19 08:10:42.748772 kubelet[2705]: E0819 08:10:42.748703 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:42.749444 containerd[1577]: time="2025-08-19T08:10:42.749347801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nszs,Uid:3b4facce-421f-4427-a71b-5656e7ec5e4e,Namespace:kube-system,Attempt:0,}" Aug 19 08:10:42.952730 containerd[1577]: time="2025-08-19T08:10:42.952606551Z" level=info msg="connecting to shim fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:10:42.981253 systemd[1]: Started cri-containerd-fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2.scope - libcontainer container fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2. Aug 19 08:10:43.062865 containerd[1577]: time="2025-08-19T08:10:43.062805411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6nszs,Uid:3b4facce-421f-4427-a71b-5656e7ec5e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\"" Aug 19 08:10:43.063703 kubelet[2705]: E0819 08:10:43.063664 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:43.067752 containerd[1577]: time="2025-08-19T08:10:43.067713496Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:10:43.221314 containerd[1577]: time="2025-08-19T08:10:43.221247771Z" level=info msg="Container ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:43.514735 containerd[1577]: time="2025-08-19T08:10:43.514586755Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\"" Aug 19 08:10:43.515582 containerd[1577]: time="2025-08-19T08:10:43.515488064Z" level=info msg="StartContainer for \"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\"" Aug 19 08:10:43.516512 containerd[1577]: time="2025-08-19T08:10:43.516455179Z" level=info msg="connecting to shim ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" protocol=ttrpc version=3 Aug 19 08:10:43.542350 systemd[1]: Started 
cri-containerd-ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916.scope - libcontainer container ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916. Aug 19 08:10:43.634227 systemd[1]: cri-containerd-ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916.scope: Deactivated successfully. Aug 19 08:10:43.635445 containerd[1577]: time="2025-08-19T08:10:43.635390429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\" id:\"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\" pid:4591 exited_at:{seconds:1755591043 nanos:634899752}" Aug 19 08:10:43.683997 containerd[1577]: time="2025-08-19T08:10:43.683874236Z" level=info msg="received exit event container_id:\"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\" id:\"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\" pid:4591 exited_at:{seconds:1755591043 nanos:634899752}" Aug 19 08:10:43.685881 containerd[1577]: time="2025-08-19T08:10:43.685745416Z" level=info msg="StartContainer for \"ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916\" returns successfully" Aug 19 08:10:43.707486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed2d035add4ef6ec2515b880a4190182f0963cdd149d4af3badacbe6b3ad6916-rootfs.mount: Deactivated successfully. Aug 19 08:10:44.168586 kubelet[2705]: E0819 08:10:44.168530 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:44.171305 containerd[1577]: time="2025-08-19T08:10:44.171266097Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:10:44.286686 kubelet[2705]: I0819 08:10:44.286596 2705 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T08:10:44Z","lastTransitionTime":"2025-08-19T08:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 19 08:10:44.332276 containerd[1577]: time="2025-08-19T08:10:44.332213655Z" level=info msg="Container 1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:44.464949 containerd[1577]: time="2025-08-19T08:10:44.464912930Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\"" Aug 19 08:10:44.465475 containerd[1577]: time="2025-08-19T08:10:44.465374871Z" level=info msg="StartContainer for \"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\"" Aug 19 08:10:44.466235 containerd[1577]: time="2025-08-19T08:10:44.466211476Z" level=info msg="connecting to shim 1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" protocol=ttrpc version=3 Aug 19 08:10:44.488288 systemd[1]: Started cri-containerd-1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6.scope - libcontainer container 
1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6. Aug 19 08:10:44.533517 containerd[1577]: time="2025-08-19T08:10:44.533463038Z" level=info msg="StartContainer for \"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\" returns successfully" Aug 19 08:10:44.537917 systemd[1]: cri-containerd-1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6.scope: Deactivated successfully. Aug 19 08:10:44.539500 containerd[1577]: time="2025-08-19T08:10:44.539453774Z" level=info msg="received exit event container_id:\"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\" id:\"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\" pid:4637 exited_at:{seconds:1755591044 nanos:539204909}" Aug 19 08:10:44.539667 containerd[1577]: time="2025-08-19T08:10:44.539635440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\" id:\"1cfe4981deab2fa82ccd03c762fa09ec61f419c0bd9338f8da65b6db4839dcc6\" pid:4637 exited_at:{seconds:1755591044 nanos:539204909}" Aug 19 08:10:45.174349 kubelet[2705]: E0819 08:10:45.174312 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:45.177215 containerd[1577]: time="2025-08-19T08:10:45.177175638Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:10:45.369431 containerd[1577]: time="2025-08-19T08:10:45.369362143Z" level=info msg="Container 56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:45.382879 containerd[1577]: time="2025-08-19T08:10:45.382817526Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\"" Aug 19 08:10:45.383489 containerd[1577]: time="2025-08-19T08:10:45.383440674Z" level=info msg="StartContainer for \"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\"" Aug 19 08:10:45.385406 containerd[1577]: time="2025-08-19T08:10:45.385351676Z" level=info msg="connecting to shim 56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" protocol=ttrpc version=3 Aug 19 08:10:45.419574 systemd[1]: Started cri-containerd-56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238.scope - libcontainer container 56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238. Aug 19 08:10:45.462878 systemd[1]: cri-containerd-56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238.scope: Deactivated successfully. 
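The sequence above follows a fixed pattern for each Cilium init step (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs): the sandbox is created once, then each container is created and started in it, runs briefly, and exits, which shows up as a scope deactivation plus a TaskExit event. The sketch below walks the same CRI sequence for one such step, reusing a connected RuntimeServiceClient as in the first sketch; the sandbox ID, image and command are placeholders, not values from the log.

```go
// initcontainer.go - create and start a container in an existing sandbox over CRI,
// then poll until it exits, mirroring the short-lived init steps in the log.
// `client` is a runtimeapi.RuntimeServiceClient connected as in the earlier sketch;
// sandboxID, sandboxCfg, image and command are placeholders.
package crisketch

import (
	"context"
	"fmt"
	"time"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runInitStep(ctx context.Context, client runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) error {

	created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // placeholder image
			Command:  []string{"/bin/sh", "-c", "true"},                          // placeholder command
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}

	if _, err := client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		return fmt.Errorf("StartContainer: %w", err)
	}

	// Poll until the runtime reports the container as exited (the TaskExit in the log).
	for {
		st, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
			ContainerId: created.ContainerId,
		})
		if err != nil {
			return err
		}
		if st.GetStatus().GetState() == runtimeapi.ContainerState_CONTAINER_EXITED {
			fmt.Printf("%s exited with code %d\n", created.ContainerId, st.GetStatus().GetExitCode())
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
}
```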
Aug 19 08:10:45.463146 containerd[1577]: time="2025-08-19T08:10:45.462878294Z" level=info msg="StartContainer for \"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\" returns successfully" Aug 19 08:10:45.464914 containerd[1577]: time="2025-08-19T08:10:45.464857127Z" level=info msg="received exit event container_id:\"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\" id:\"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\" pid:4681 exited_at:{seconds:1755591045 nanos:464513452}" Aug 19 08:10:45.465107 containerd[1577]: time="2025-08-19T08:10:45.465027782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\" id:\"56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238\" pid:4681 exited_at:{seconds:1755591045 nanos:464513452}" Aug 19 08:10:45.494306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56c280b57fe058d7a5df8dbf9284c87391bd6c43114ea3071e7d664456258238-rootfs.mount: Deactivated successfully. Aug 19 08:10:46.179517 kubelet[2705]: E0819 08:10:46.179469 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:46.182124 containerd[1577]: time="2025-08-19T08:10:46.181905438Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:10:46.194389 containerd[1577]: time="2025-08-19T08:10:46.193572842Z" level=info msg="Container 5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:46.206910 containerd[1577]: time="2025-08-19T08:10:46.206855945Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\"" Aug 19 08:10:46.212577 containerd[1577]: time="2025-08-19T08:10:46.212511202Z" level=info msg="StartContainer for \"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\"" Aug 19 08:10:46.215016 containerd[1577]: time="2025-08-19T08:10:46.214944879Z" level=info msg="connecting to shim 5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" protocol=ttrpc version=3 Aug 19 08:10:46.267358 systemd[1]: Started cri-containerd-5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a.scope - libcontainer container 5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a. Aug 19 08:10:46.296362 systemd[1]: cri-containerd-5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a.scope: Deactivated successfully. 
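Throughout this window the kubelet keeps reporting "Container runtime network not ready ... cni plugin not initialized", and at 08:10:44 it flips the node's Ready condition to False; the condition clears once the new cilium-agent installs its CNI configuration. The sketch below reads that same Ready condition from the API with client-go, assuming in-cluster credentials and the node name "localhost" seen in the log.

```go
// nodeready.go - read a node's Ready condition, the condition the kubelet sets to
// False at 08:10:44 while the CNI plugin is not yet initialized.
// Assumes in-cluster credentials; the node name "localhost" matches the log.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```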
Aug 19 08:10:46.297269 containerd[1577]: time="2025-08-19T08:10:46.297190110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\" id:\"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\" pid:4719 exited_at:{seconds:1755591046 nanos:296520995}" Aug 19 08:10:46.297836 containerd[1577]: time="2025-08-19T08:10:46.297793760Z" level=info msg="received exit event container_id:\"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\" id:\"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\" pid:4719 exited_at:{seconds:1755591046 nanos:296520995}" Aug 19 08:10:46.306216 containerd[1577]: time="2025-08-19T08:10:46.306172266Z" level=info msg="StartContainer for \"5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a\" returns successfully" Aug 19 08:10:46.320587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e4e42c85b4be713a9e5cd049dad10f9b7a2d8e6ff6a06bac03b01f78823669a-rootfs.mount: Deactivated successfully. Aug 19 08:10:46.910326 kubelet[2705]: E0819 08:10:46.910267 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:10:47.186241 kubelet[2705]: E0819 08:10:47.185863 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:47.188977 containerd[1577]: time="2025-08-19T08:10:47.188225133Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:10:47.200552 containerd[1577]: time="2025-08-19T08:10:47.200496770Z" level=info msg="Container 94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:10:47.208675 containerd[1577]: time="2025-08-19T08:10:47.208629833Z" level=info msg="CreateContainer within sandbox \"fa5bc5e9626c472425a8da239531b53c6cd8078574eec5c214226df973715fd2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\"" Aug 19 08:10:47.210175 containerd[1577]: time="2025-08-19T08:10:47.209256587Z" level=info msg="StartContainer for \"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\"" Aug 19 08:10:47.210175 containerd[1577]: time="2025-08-19T08:10:47.210066790Z" level=info msg="connecting to shim 94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d" address="unix:///run/containerd/s/7decf29aa8432f3fb1ad7c9aeda425712a20372b60956a09c43325f3868e068a" protocol=ttrpc version=3 Aug 19 08:10:47.235260 systemd[1]: Started cri-containerd-94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d.scope - libcontainer container 94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d. 
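Unlike the init steps, the cilium-agent container started here is long-lived; the later TaskExit events that reference container 94af3f99... but carry different ids are short exec sessions inside it, which is how the kubelet's exec-based health probes appear on containerd's side. The sketch below runs one such exec over CRI ExecSync, reusing the RuntimeServiceClient from the first sketch; the probe command is an assumption (Cilium commonly probes with `cilium status --brief`), not something recorded in the log.

```go
// execprobe.go - run a one-off command inside a running container via CRI ExecSync,
// the kind of short exec session behind the repeated TaskExit events for the
// cilium-agent container. The command is an assumed probe, not taken from the log.
package execsketch

import (
	"context"
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func probe(ctx context.Context, client runtimeapi.RuntimeServiceClient, containerID string) error {
	resp, err := client.ExecSync(ctx, &runtimeapi.ExecSyncRequest{
		ContainerId: containerID,
		Cmd:         []string{"cilium", "status", "--brief"}, // assumed probe command
		Timeout:     5,                                       // seconds
	})
	if err != nil {
		return err
	}
	if resp.ExitCode != 0 {
		return fmt.Errorf("probe failed (exit %d): %s", resp.ExitCode, resp.Stderr)
	}
	fmt.Printf("probe ok: %s\n", resp.Stdout)
	return nil
}
```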
Aug 19 08:10:47.278121 containerd[1577]: time="2025-08-19T08:10:47.278049230Z" level=info msg="StartContainer for \"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" returns successfully" Aug 19 08:10:47.351507 containerd[1577]: time="2025-08-19T08:10:47.351437118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" id:\"ecee5add0bde5b44b78ff0770424dba82e729b53ab0edc54c9a47c50db8c199f\" pid:4787 exited_at:{seconds:1755591047 nanos:351041575}" Aug 19 08:10:47.822140 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Aug 19 08:10:48.201420 kubelet[2705]: E0819 08:10:48.201287 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:49.203281 kubelet[2705]: E0819 08:10:49.203228 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:49.280537 containerd[1577]: time="2025-08-19T08:10:49.280483137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" id:\"fe1e2020f6c67aa643b7ed523489342df94843a2629d3dc721834df7a53a54ed\" pid:4863 exit_status:1 exited_at:{seconds:1755591049 nanos:280129915}" Aug 19 08:10:50.204469 kubelet[2705]: E0819 08:10:50.204428 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:51.113714 systemd-networkd[1498]: lxc_health: Link UP Aug 19 08:10:51.114025 systemd-networkd[1498]: lxc_health: Gained carrier Aug 19 08:10:51.412911 containerd[1577]: time="2025-08-19T08:10:51.412519258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" id:\"a947e0664878d8cf5f484684b0e9e1c147cb9e34239130f6ad8ef20c860cf2f2\" pid:5315 exited_at:{seconds:1755591051 nanos:411221880}" Aug 19 08:10:52.250394 systemd-networkd[1498]: lxc_health: Gained IPv6LL Aug 19 08:10:52.750555 kubelet[2705]: E0819 08:10:52.750503 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:52.764923 kubelet[2705]: I0819 08:10:52.764616 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6nszs" podStartSLOduration=10.764598444 podStartE2EDuration="10.764598444s" podCreationTimestamp="2025-08-19 08:10:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:10:48.215627493 +0000 UTC m=+96.450735651" watchObservedRunningTime="2025-08-19 08:10:52.764598444 +0000 UTC m=+100.999706602" Aug 19 08:10:53.210577 kubelet[2705]: E0819 08:10:53.210389 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:53.645419 containerd[1577]: time="2025-08-19T08:10:53.645329541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" 
id:\"84d95e4c408071660f187a92a4e89b13e9f03ef64ab1bdf6fc909edfd30cf081\" pid:5355 exited_at:{seconds:1755591053 nanos:644568434}" Aug 19 08:10:54.212183 kubelet[2705]: E0819 08:10:54.212141 2705 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:10:55.731436 containerd[1577]: time="2025-08-19T08:10:55.731375367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" id:\"b5507452aae50df964b53ea2fe2ca728bdc65b3a490410b544b3f51ea7c35225\" pid:5385 exited_at:{seconds:1755591055 nanos:730950490}" Aug 19 08:10:55.733285 kubelet[2705]: E0819 08:10:55.733239 2705 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36718->127.0.0.1:43883: write tcp 127.0.0.1:36718->127.0.0.1:43883: write: broken pipe Aug 19 08:10:57.913746 containerd[1577]: time="2025-08-19T08:10:57.913694745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94af3f99b50057be404d0ab9aafabda2e68e0f146c115a14f715d9f15bed8e7d\" id:\"b46ae39152313e863d0be712989a9c5382443a4c5e0eefd12acfe49607213e9e\" pid:5409 exited_at:{seconds:1755591057 nanos:913313140}" Aug 19 08:10:57.923224 sshd[4526]: Connection closed by 10.0.0.1 port 39600 Aug 19 08:10:57.989304 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Aug 19 08:10:57.994357 systemd[1]: sshd@27-10.0.0.49:22-10.0.0.1:39600.service: Deactivated successfully. Aug 19 08:10:57.996672 systemd[1]: session-28.scope: Deactivated successfully. Aug 19 08:10:57.997670 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit. Aug 19 08:10:57.999243 systemd-logind[1548]: Removed session 28.