Aug 19 08:15:17.851156 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025 Aug 19 08:15:17.851184 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:17.851196 kernel: BIOS-provided physical RAM map: Aug 19 08:15:17.851203 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:15:17.851209 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:15:17.851216 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:15:17.851223 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:15:17.851230 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Aug 19 08:15:17.851239 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Aug 19 08:15:17.851246 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:15:17.851252 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:15:17.851262 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:15:17.851268 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:15:17.851275 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:15:17.851283 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:15:17.851290 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:15:17.851302 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:15:17.851310 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:15:17.851319 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:15:17.851326 kernel: NX (Execute Disable) protection: active Aug 19 08:15:17.851335 kernel: APIC: Static calls initialized Aug 19 08:15:17.851343 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Aug 19 08:15:17.851350 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Aug 19 08:15:17.851357 kernel: extended physical RAM map: Aug 19 08:15:17.851364 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Aug 19 08:15:17.851371 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Aug 19 08:15:17.851379 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Aug 19 08:15:17.851388 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Aug 19 08:15:17.851396 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Aug 19 08:15:17.851403 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Aug 19 08:15:17.851410 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Aug 19 08:15:17.851417 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Aug 19 08:15:17.851424 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Aug 19 08:15:17.851431 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] 
reserved Aug 19 08:15:17.851438 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Aug 19 08:15:17.851445 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Aug 19 08:15:17.851452 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Aug 19 08:15:17.851459 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Aug 19 08:15:17.851468 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Aug 19 08:15:17.851476 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Aug 19 08:15:17.851486 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Aug 19 08:15:17.851494 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Aug 19 08:15:17.851501 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 19 08:15:17.851508 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 19 08:15:17.851518 kernel: efi: EFI v2.7 by EDK II Aug 19 08:15:17.851525 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Aug 19 08:15:17.851533 kernel: random: crng init done Aug 19 08:15:17.851540 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Aug 19 08:15:17.851547 kernel: secureboot: Secure boot enabled Aug 19 08:15:17.851555 kernel: SMBIOS 2.8 present. Aug 19 08:15:17.851562 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Aug 19 08:15:17.851569 kernel: DMI: Memory slots populated: 1/1 Aug 19 08:15:17.851577 kernel: Hypervisor detected: KVM Aug 19 08:15:17.851584 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 19 08:15:17.851591 kernel: kvm-clock: using sched offset of 6913045078 cycles Aug 19 08:15:17.851602 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 19 08:15:17.851609 kernel: tsc: Detected 2794.750 MHz processor Aug 19 08:15:17.851617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 19 08:15:17.851625 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 19 08:15:17.851632 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Aug 19 08:15:17.851639 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 19 08:15:17.851649 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 19 08:15:17.851659 kernel: Using GB pages for direct mapping Aug 19 08:15:17.851668 kernel: ACPI: Early table checksum verification disabled Aug 19 08:15:17.851678 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Aug 19 08:15:17.851686 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Aug 19 08:15:17.851693 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:15:17.851711 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:15:17.851722 kernel: ACPI: FACS 0x000000009BBDD000 000040 Aug 19 08:15:17.851729 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:15:17.851747 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:15:17.851764 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 19 08:15:17.851788 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Aug 19 08:15:17.851801 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Aug 19 08:15:17.851808 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Aug 19 08:15:17.851816 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Aug 19 08:15:17.851823 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Aug 19 08:15:17.851831 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Aug 19 08:15:17.851847 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Aug 19 08:15:17.851864 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Aug 19 08:15:17.851881 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Aug 19 08:15:17.851889 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Aug 19 08:15:17.851900 kernel: No NUMA configuration found Aug 19 08:15:17.851908 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Aug 19 08:15:17.851916 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Aug 19 08:15:17.851923 kernel: Zone ranges: Aug 19 08:15:17.851931 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 19 08:15:17.851939 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Aug 19 08:15:17.851946 kernel: Normal empty Aug 19 08:15:17.851953 kernel: Device empty Aug 19 08:15:17.851961 kernel: Movable zone start for each node Aug 19 08:15:17.851970 kernel: Early memory node ranges Aug 19 08:15:17.852013 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Aug 19 08:15:17.852021 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Aug 19 08:15:17.852028 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Aug 19 08:15:17.852036 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Aug 19 08:15:17.852044 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Aug 19 08:15:17.852052 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Aug 19 08:15:17.852059 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 19 08:15:17.852067 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Aug 19 08:15:17.852078 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 19 08:15:17.852085 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 19 08:15:17.852093 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Aug 19 08:15:17.852101 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Aug 19 08:15:17.852108 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 19 08:15:17.852116 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 19 08:15:17.852124 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 19 08:15:17.852131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 19 08:15:17.852138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 19 08:15:17.852151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 19 08:15:17.852159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 19 08:15:17.852166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 19 08:15:17.852174 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 19 08:15:17.852181 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 19 08:15:17.852189 kernel: TSC deadline timer available Aug 19 08:15:17.852196 kernel: CPU topo: Max. 
logical packages: 1 Aug 19 08:15:17.852204 kernel: CPU topo: Max. logical dies: 1 Aug 19 08:15:17.852214 kernel: CPU topo: Max. dies per package: 1 Aug 19 08:15:17.852228 kernel: CPU topo: Max. threads per core: 1 Aug 19 08:15:17.852236 kernel: CPU topo: Num. cores per package: 4 Aug 19 08:15:17.852244 kernel: CPU topo: Num. threads per package: 4 Aug 19 08:15:17.852254 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Aug 19 08:15:17.852264 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 19 08:15:17.852272 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 19 08:15:17.852280 kernel: kvm-guest: setup PV sched yield Aug 19 08:15:17.852288 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Aug 19 08:15:17.852298 kernel: Booting paravirtualized kernel on KVM Aug 19 08:15:17.852306 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 19 08:15:17.852314 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Aug 19 08:15:17.852322 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Aug 19 08:15:17.852331 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Aug 19 08:15:17.852340 kernel: pcpu-alloc: [0] 0 1 2 3 Aug 19 08:15:17.852348 kernel: kvm-guest: PV spinlocks enabled Aug 19 08:15:17.852358 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 19 08:15:17.852368 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:17.852378 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 19 08:15:17.852386 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 19 08:15:17.852394 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 19 08:15:17.852402 kernel: Fallback order for Node 0: 0 Aug 19 08:15:17.852410 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Aug 19 08:15:17.852417 kernel: Policy zone: DMA32 Aug 19 08:15:17.852425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 19 08:15:17.852433 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Aug 19 08:15:17.852443 kernel: ftrace: allocating 40101 entries in 157 pages Aug 19 08:15:17.852451 kernel: ftrace: allocated 157 pages with 5 groups Aug 19 08:15:17.852459 kernel: Dynamic Preempt: voluntary Aug 19 08:15:17.852466 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 19 08:15:17.852479 kernel: rcu: RCU event tracing is enabled. Aug 19 08:15:17.852487 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Aug 19 08:15:17.852495 kernel: Trampoline variant of Tasks RCU enabled. Aug 19 08:15:17.852503 kernel: Rude variant of Tasks RCU enabled. Aug 19 08:15:17.852511 kernel: Tracing variant of Tasks RCU enabled. Aug 19 08:15:17.852519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 19 08:15:17.852529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Aug 19 08:15:17.852537 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Aug 19 08:15:17.852545 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:15:17.852556 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Aug 19 08:15:17.852564 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Aug 19 08:15:17.852572 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 19 08:15:17.852580 kernel: Console: colour dummy device 80x25 Aug 19 08:15:17.852588 kernel: printk: legacy console [ttyS0] enabled Aug 19 08:15:17.852598 kernel: ACPI: Core revision 20240827 Aug 19 08:15:17.852606 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 19 08:15:17.852614 kernel: APIC: Switch to symmetric I/O mode setup Aug 19 08:15:17.852622 kernel: x2apic enabled Aug 19 08:15:17.852630 kernel: APIC: Switched APIC routing to: physical x2apic Aug 19 08:15:17.852638 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 19 08:15:17.852646 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 19 08:15:17.852653 kernel: kvm-guest: setup PV IPIs Aug 19 08:15:17.852661 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 19 08:15:17.852672 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:15:17.852680 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Aug 19 08:15:17.852688 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 19 08:15:17.852696 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 19 08:15:17.852703 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 19 08:15:17.852714 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 19 08:15:17.852722 kernel: Spectre V2 : Mitigation: Retpolines Aug 19 08:15:17.852730 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 19 08:15:17.852738 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Aug 19 08:15:17.852748 kernel: RETBleed: Mitigation: untrained return thunk Aug 19 08:15:17.852756 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 19 08:15:17.852764 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 19 08:15:17.852772 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 19 08:15:17.852781 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 19 08:15:17.852789 kernel: x86/bugs: return thunk changed Aug 19 08:15:17.852796 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 19 08:15:17.852804 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 19 08:15:17.852814 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 19 08:15:17.852822 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 19 08:15:17.852830 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 19 08:15:17.852838 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Aug 19 08:15:17.852846 kernel: Freeing SMP alternatives memory: 32K Aug 19 08:15:17.852854 kernel: pid_max: default: 32768 minimum: 301 Aug 19 08:15:17.852862 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 19 08:15:17.852874 kernel: landlock: Up and running. Aug 19 08:15:17.852881 kernel: SELinux: Initializing. Aug 19 08:15:17.852892 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:15:17.852900 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 19 08:15:17.852908 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Aug 19 08:15:17.852915 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 19 08:15:17.852923 kernel: ... version: 0 Aug 19 08:15:17.852931 kernel: ... bit width: 48 Aug 19 08:15:17.852942 kernel: ... generic registers: 6 Aug 19 08:15:17.852950 kernel: ... value mask: 0000ffffffffffff Aug 19 08:15:17.852958 kernel: ... max period: 00007fffffffffff Aug 19 08:15:17.852969 kernel: ... fixed-purpose events: 0 Aug 19 08:15:17.852996 kernel: ... event mask: 000000000000003f Aug 19 08:15:17.853004 kernel: signal: max sigframe size: 1776 Aug 19 08:15:17.853012 kernel: rcu: Hierarchical SRCU implementation. Aug 19 08:15:17.853020 kernel: rcu: Max phase no-delay instances is 400. Aug 19 08:15:17.853028 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 19 08:15:17.853036 kernel: smp: Bringing up secondary CPUs ... Aug 19 08:15:17.853044 kernel: smpboot: x86: Booting SMP configuration: Aug 19 08:15:17.853052 kernel: .... node #0, CPUs: #1 #2 #3 Aug 19 08:15:17.853059 kernel: smp: Brought up 1 node, 4 CPUs Aug 19 08:15:17.853070 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Aug 19 08:15:17.853078 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 137064K reserved, 0K cma-reserved) Aug 19 08:15:17.853086 kernel: devtmpfs: initialized Aug 19 08:15:17.853094 kernel: x86/mm: Memory block size: 128MB Aug 19 08:15:17.853102 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Aug 19 08:15:17.853110 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Aug 19 08:15:17.853118 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 19 08:15:17.853126 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Aug 19 08:15:17.853136 kernel: pinctrl core: initialized pinctrl subsystem Aug 19 08:15:17.853144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 19 08:15:17.853151 kernel: audit: initializing netlink subsys (disabled) Aug 19 08:15:17.853159 kernel: audit: type=2000 audit(1755591314.114:1): state=initialized audit_enabled=0 res=1 Aug 19 08:15:17.853167 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 19 08:15:17.853175 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 19 08:15:17.853183 kernel: cpuidle: using governor menu Aug 19 08:15:17.853191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 19 08:15:17.853199 kernel: dca service started, version 1.12.1 Aug 19 08:15:17.853209 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Aug 19 08:15:17.853217 kernel: PCI: Using configuration type 1 for base access Aug 19 08:15:17.853224 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Aug 19 08:15:17.853232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 19 08:15:17.853240 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 19 08:15:17.853248 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 19 08:15:17.853256 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 19 08:15:17.853264 kernel: ACPI: Added _OSI(Module Device) Aug 19 08:15:17.853271 kernel: ACPI: Added _OSI(Processor Device) Aug 19 08:15:17.853281 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 19 08:15:17.853289 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 19 08:15:17.853297 kernel: ACPI: Interpreter enabled Aug 19 08:15:17.853305 kernel: ACPI: PM: (supports S0 S5) Aug 19 08:15:17.853314 kernel: ACPI: Using IOAPIC for interrupt routing Aug 19 08:15:17.853323 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 19 08:15:17.853332 kernel: PCI: Using E820 reservations for host bridge windows Aug 19 08:15:17.853341 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 19 08:15:17.853349 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 19 08:15:17.853598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 19 08:15:17.853787 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 19 08:15:17.853918 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 19 08:15:17.853929 kernel: PCI host bridge to bus 0000:00 Aug 19 08:15:17.854092 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 19 08:15:17.854206 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 19 08:15:17.854322 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 19 08:15:17.854432 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Aug 19 08:15:17.854542 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Aug 19 08:15:17.854669 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:15:17.854783 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 19 08:15:17.854937 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 19 08:15:17.855125 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 19 08:15:17.855257 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Aug 19 08:15:17.855382 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Aug 19 08:15:17.855502 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Aug 19 08:15:17.855622 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 19 08:15:17.855791 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Aug 19 08:15:17.855968 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Aug 19 08:15:17.856137 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Aug 19 08:15:17.856266 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Aug 19 08:15:17.856408 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 19 08:15:17.856531 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Aug 19 08:15:17.856652 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Aug 19 08:15:17.856843 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Aug 19 08:15:17.857014 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 19 08:15:17.857147 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Aug 19 08:15:17.857268 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Aug 19 08:15:17.857388 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Aug 19 08:15:17.857509 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Aug 19 08:15:17.857647 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 19 08:15:17.857769 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 19 08:15:17.857901 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 19 08:15:17.858076 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Aug 19 08:15:17.858202 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Aug 19 08:15:17.858342 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 19 08:15:17.858465 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Aug 19 08:15:17.858476 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 19 08:15:17.858484 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 19 08:15:17.858492 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 19 08:15:17.858500 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 19 08:15:17.858513 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 19 08:15:17.858520 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 19 08:15:17.858528 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 19 08:15:17.858536 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 19 08:15:17.858544 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 19 08:15:17.858552 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 19 08:15:17.858560 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 19 08:15:17.858568 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 19 08:15:17.858575 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 19 08:15:17.858586 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 19 08:15:17.858594 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 19 08:15:17.858602 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 19 08:15:17.858609 kernel: iommu: Default domain type: Translated Aug 19 08:15:17.858617 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 19 08:15:17.858625 kernel: efivars: Registered efivars operations Aug 19 08:15:17.858633 kernel: PCI: Using ACPI for IRQ routing Aug 19 08:15:17.858641 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 19 08:15:17.858649 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Aug 19 08:15:17.858658 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Aug 19 08:15:17.858666 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Aug 19 08:15:17.858674 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Aug 19 08:15:17.858682 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Aug 19 08:15:17.858804 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 19 08:15:17.858924 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 19 08:15:17.859101 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Aug 19 08:15:17.859114 kernel: vgaarb: loaded Aug 19 08:15:17.859126 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 19 08:15:17.859135 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 19 08:15:17.859142 kernel: clocksource: Switched to clocksource kvm-clock Aug 19 08:15:17.859150 kernel: VFS: Disk quotas dquot_6.6.0 Aug 19 08:15:17.859158 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 19 08:15:17.859166 kernel: pnp: PnP ACPI init Aug 19 08:15:17.859311 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Aug 19 08:15:17.859323 kernel: pnp: PnP ACPI: found 6 devices Aug 19 08:15:17.859334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 19 08:15:17.859342 kernel: NET: Registered PF_INET protocol family Aug 19 08:15:17.859350 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 19 08:15:17.859358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 19 08:15:17.859366 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 19 08:15:17.859374 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 19 08:15:17.859382 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 19 08:15:17.859390 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 19 08:15:17.859398 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:15:17.859409 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 19 08:15:17.859417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 19 08:15:17.859425 kernel: NET: Registered PF_XDP protocol family Aug 19 08:15:17.859549 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Aug 19 08:15:17.859671 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Aug 19 08:15:17.859784 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 19 08:15:17.859895 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 19 08:15:17.860049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 19 08:15:17.860166 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Aug 19 08:15:17.860282 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Aug 19 08:15:17.860393 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Aug 19 08:15:17.860404 kernel: PCI: CLS 0 bytes, default 64 Aug 19 08:15:17.860412 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Aug 19 08:15:17.860420 kernel: Initialise system trusted keyrings Aug 19 08:15:17.860428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 19 08:15:17.860436 kernel: Key type asymmetric registered Aug 19 08:15:17.860444 kernel: Asymmetric key parser 'x509' registered Aug 19 08:15:17.860455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 19 08:15:17.860478 kernel: io scheduler mq-deadline registered Aug 19 08:15:17.860488 kernel: io scheduler kyber registered Aug 19 08:15:17.860496 kernel: io scheduler bfq registered Aug 19 08:15:17.860504 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 19 08:15:17.860513 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 19 08:15:17.860521 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 19 08:15:17.860529 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 19 08:15:17.860537 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 19 08:15:17.860548 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 19 08:15:17.860556 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 19 08:15:17.860564 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 19 08:15:17.860572 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 19 08:15:17.860732 kernel: rtc_cmos 00:04: RTC can wake from S4 Aug 19 08:15:17.860744 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 19 08:15:17.860858 kernel: rtc_cmos 00:04: registered as rtc0 Aug 19 08:15:17.860997 kernel: rtc_cmos 00:04: setting system clock to 2025-08-19T08:15:17 UTC (1755591317) Aug 19 08:15:17.861121 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Aug 19 08:15:17.861133 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 19 08:15:17.861141 kernel: efifb: probing for efifb Aug 19 08:15:17.861149 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Aug 19 08:15:17.861157 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Aug 19 08:15:17.861166 kernel: efifb: scrolling: redraw Aug 19 08:15:17.861177 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 19 08:15:17.861185 kernel: Console: switching to colour frame buffer device 160x50 Aug 19 08:15:17.861193 kernel: fb0: EFI VGA frame buffer device Aug 19 08:15:17.861204 kernel: pstore: Using crash dump compression: deflate Aug 19 08:15:17.861212 kernel: pstore: Registered efi_pstore as persistent store backend Aug 19 08:15:17.861222 kernel: NET: Registered PF_INET6 protocol family Aug 19 08:15:17.861231 kernel: Segment Routing with IPv6 Aug 19 08:15:17.861239 kernel: In-situ OAM (IOAM) with IPv6 Aug 19 08:15:17.861249 kernel: NET: Registered PF_PACKET protocol family Aug 19 08:15:17.861257 kernel: Key type dns_resolver registered Aug 19 08:15:17.861265 kernel: IPI shorthand broadcast: enabled Aug 19 08:15:17.861273 kernel: sched_clock: Marking stable (4081002383, 144021056)->(4335097340, -110073901) Aug 19 08:15:17.861281 kernel: registered taskstats version 1 Aug 19 08:15:17.861289 kernel: Loading compiled-in X.509 certificates Aug 19 08:15:17.861298 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf' Aug 19 08:15:17.861306 kernel: Demotion targets for Node 0: null Aug 19 08:15:17.861314 kernel: Key type .fscrypt registered Aug 19 08:15:17.861325 kernel: Key type fscrypt-provisioning registered Aug 19 08:15:17.861335 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 19 08:15:17.861343 kernel: ima: Allocated hash algorithm: sha1 Aug 19 08:15:17.861353 kernel: ima: No architecture policies found Aug 19 08:15:17.861362 kernel: clk: Disabling unused clocks Aug 19 08:15:17.861370 kernel: Warning: unable to open an initial console. 
Aug 19 08:15:17.861378 kernel: Freeing unused kernel image (initmem) memory: 54040K Aug 19 08:15:17.861387 kernel: Write protecting the kernel read-only data: 24576k Aug 19 08:15:17.861395 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 19 08:15:17.861405 kernel: Run /init as init process Aug 19 08:15:17.861413 kernel: with arguments: Aug 19 08:15:17.861421 kernel: /init Aug 19 08:15:17.861429 kernel: with environment: Aug 19 08:15:17.861437 kernel: HOME=/ Aug 19 08:15:17.861445 kernel: TERM=linux Aug 19 08:15:17.861453 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 19 08:15:17.861465 systemd[1]: Successfully made /usr/ read-only. Aug 19 08:15:17.861479 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:15:17.861488 systemd[1]: Detected virtualization kvm. Aug 19 08:15:17.861497 systemd[1]: Detected architecture x86-64. Aug 19 08:15:17.861505 systemd[1]: Running in initrd. Aug 19 08:15:17.861513 systemd[1]: No hostname configured, using default hostname. Aug 19 08:15:17.861522 systemd[1]: Hostname set to . Aug 19 08:15:17.861531 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:15:17.861539 systemd[1]: Queued start job for default target initrd.target. Aug 19 08:15:17.861550 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:15:17.861559 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:17.861568 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 19 08:15:17.861577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:15:17.861586 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 19 08:15:17.861595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 19 08:15:17.861607 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 19 08:15:17.861616 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 19 08:15:17.861625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:15:17.861633 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:17.861642 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:15:17.861651 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:15:17.861659 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:15:17.861668 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:15:17.861677 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:15:17.861688 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:15:17.861696 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 19 08:15:17.861707 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 19 08:15:17.861716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Aug 19 08:15:17.861724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:17.861733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:17.861741 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:15:17.861750 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 19 08:15:17.861761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:15:17.861770 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 19 08:15:17.861779 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 19 08:15:17.861788 systemd[1]: Starting systemd-fsck-usr.service... Aug 19 08:15:17.861796 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:15:17.861805 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:15:17.861814 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:17.861822 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 19 08:15:17.861834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:17.861863 systemd-journald[220]: Collecting audit messages is disabled. Aug 19 08:15:17.861885 systemd[1]: Finished systemd-fsck-usr.service. Aug 19 08:15:17.861894 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:15:17.861904 systemd-journald[220]: Journal started Aug 19 08:15:17.861923 systemd-journald[220]: Runtime Journal (/run/log/journal/3bb1b996077841e09aad34a196ed8081) is 6M, max 48.2M, 42.2M free. Aug 19 08:15:17.854657 systemd-modules-load[221]: Inserted module 'overlay' Aug 19 08:15:17.865006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:17.868297 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:15:17.871659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 19 08:15:17.876247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:15:17.883012 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 19 08:15:17.884677 systemd-modules-load[221]: Inserted module 'br_netfilter' Aug 19 08:15:17.885684 kernel: Bridge firewalling registered Aug 19 08:15:17.890186 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:15:17.890884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:15:17.893142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:15:17.898661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:15:17.899891 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 19 08:15:17.905000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:17.908699 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:15:17.923558 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Aug 19 08:15:17.924907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:17.928610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:15:17.934643 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:15:17.951098 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:15:17.979888 systemd-resolved[262]: Positive Trust Anchors: Aug 19 08:15:17.979919 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:15:17.979952 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:15:17.984743 systemd-resolved[262]: Defaulting to hostname 'linux'. Aug 19 08:15:17.986538 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:15:17.990649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:18.089022 kernel: SCSI subsystem initialized Aug 19 08:15:18.099009 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:15:18.109006 kernel: iscsi: registered transport (tcp) Aug 19 08:15:18.134018 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:15:18.134054 kernel: QLogic iSCSI HBA Driver Aug 19 08:15:18.155293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:15:18.176681 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:18.177310 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:15:18.238260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:15:18.240522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 19 08:15:18.306018 kernel: raid6: avx2x4 gen() 30153 MB/s Aug 19 08:15:18.323008 kernel: raid6: avx2x2 gen() 31006 MB/s Aug 19 08:15:18.340060 kernel: raid6: avx2x1 gen() 25848 MB/s Aug 19 08:15:18.340086 kernel: raid6: using algorithm avx2x2 gen() 31006 MB/s Aug 19 08:15:18.358079 kernel: raid6: .... xor() 19733 MB/s, rmw enabled Aug 19 08:15:18.358110 kernel: raid6: using avx2x2 recovery algorithm Aug 19 08:15:18.379014 kernel: xor: automatically using best checksumming function avx Aug 19 08:15:18.554026 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:15:18.565566 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:15:18.568216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:18.604236 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Aug 19 08:15:18.612768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:15:18.614162 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:15:18.640025 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Aug 19 08:15:18.675695 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:15:18.708140 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:15:18.791886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:18.819097 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 08:15:18.850664 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:15:18.850731 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Aug 19 08:15:18.850954 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 08:15:18.856540 kernel: AES CTR mode by8 optimization enabled Aug 19 08:15:18.865111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 08:15:18.865153 kernel: GPT:9289727 != 19775487 Aug 19 08:15:18.865168 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 08:15:18.866159 kernel: GPT:9289727 != 19775487 Aug 19 08:15:18.866186 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 08:15:18.867242 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:15:18.876021 kernel: libata version 3.00 loaded. Aug 19 08:15:18.883027 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 19 08:15:18.896337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:15:18.897134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:18.900592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:18.902514 kernel: ahci 0000:00:1f.2: version 3.0 Aug 19 08:15:18.902752 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 19 08:15:18.906165 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 19 08:15:18.906347 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 19 08:15:18.906490 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 19 08:15:18.907328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:18.910131 kernel: scsi host0: ahci Aug 19 08:15:18.910490 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:18.912569 kernel: scsi host1: ahci Aug 19 08:15:18.912769 kernel: scsi host2: ahci Aug 19 08:15:18.914023 kernel: scsi host3: ahci Aug 19 08:15:18.914265 kernel: scsi host4: ahci Aug 19 08:15:18.924767 kernel: scsi host5: ahci Aug 19 08:15:18.924330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 19 08:15:18.933011 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Aug 19 08:15:18.933036 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Aug 19 08:15:18.933049 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Aug 19 08:15:18.933060 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Aug 19 08:15:18.933071 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Aug 19 08:15:18.933081 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Aug 19 08:15:18.924464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:18.933277 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:18.954069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 08:15:18.968701 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 19 08:15:18.977676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:15:18.984570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 19 08:15:18.984810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 19 08:15:18.990001 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 08:15:18.991751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:19.024853 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:19.156809 disk-uuid[635]: Primary Header is updated. Aug 19 08:15:19.156809 disk-uuid[635]: Secondary Entries is updated. Aug 19 08:15:19.156809 disk-uuid[635]: Secondary Header is updated. Aug 19 08:15:19.160339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:15:19.165010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:15:19.243163 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 19 08:15:19.243217 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 19 08:15:19.244025 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 19 08:15:19.244086 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 19 08:15:19.245006 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Aug 19 08:15:19.245997 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 19 08:15:19.247010 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Aug 19 08:15:19.247030 kernel: ata3.00: applying bridge limits Aug 19 08:15:19.248002 kernel: ata3.00: configured for UDMA/100 Aug 19 08:15:19.250004 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Aug 19 08:15:19.306546 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Aug 19 08:15:19.306831 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 19 08:15:19.327002 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 19 08:15:19.720831 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 08:15:19.723425 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:15:19.725830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Aug 19 08:15:19.728083 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:15:19.730865 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:15:19.771466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:15:20.167660 disk-uuid[640]: The operation has completed successfully. Aug 19 08:15:20.168800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 08:15:20.199579 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:15:20.199702 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:15:20.238293 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:15:20.261569 sh[668]: Success Aug 19 08:15:20.281416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 08:15:20.281463 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:15:20.282502 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:15:20.292004 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 19 08:15:20.323886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:15:20.328450 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:15:20.344154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:15:20.351619 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (680) Aug 19 08:15:20.351654 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:15:20.351666 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:20.352462 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:15:20.359068 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:15:20.361563 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:15:20.363830 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:15:20.364884 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 08:15:20.367617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:15:20.393828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Aug 19 08:15:20.393862 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:20.393874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:20.394701 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:15:20.405015 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:20.407095 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:15:20.409046 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 19 08:15:20.633738 ignition[756]: Ignition 2.21.0 Aug 19 08:15:20.633752 ignition[756]: Stage: fetch-offline Aug 19 08:15:20.633801 ignition[756]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:20.633812 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:20.633944 ignition[756]: parsed url from cmdline: "" Aug 19 08:15:20.633949 ignition[756]: no config URL provided Aug 19 08:15:20.633954 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:15:20.633964 ignition[756]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:15:20.634007 ignition[756]: op(1): [started] loading QEMU firmware config module Aug 19 08:15:20.634012 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 08:15:20.643625 ignition[756]: op(1): [finished] loading QEMU firmware config module Aug 19 08:15:20.691600 ignition[756]: parsing config with SHA512: fd3f81a5a64ed976985dd1470530c1f14fe780714e32dbc10e4529a52bce1c1106ec4fb5c115f6d963031243d323227e684d820df48a7c07c34d05aa4d391121 Aug 19 08:15:20.702646 unknown[756]: fetched base config from "system" Aug 19 08:15:20.702662 unknown[756]: fetched user config from "qemu" Aug 19 08:15:20.703098 ignition[756]: fetch-offline: fetch-offline passed Aug 19 08:15:20.703167 ignition[756]: Ignition finished successfully Aug 19 08:15:20.706272 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:15:20.708835 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:15:20.712546 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:15:20.764864 systemd-networkd[860]: lo: Link UP Aug 19 08:15:20.764872 systemd-networkd[860]: lo: Gained carrier Aug 19 08:15:20.766614 systemd-networkd[860]: Enumeration completed Aug 19 08:15:20.767095 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:20.767099 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:15:20.768119 systemd-networkd[860]: eth0: Link UP Aug 19 08:15:20.768179 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:15:20.768274 systemd-networkd[860]: eth0: Gained carrier Aug 19 08:15:20.768284 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:20.771877 systemd[1]: Reached target network.target - Network. Aug 19 08:15:20.773885 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 08:15:20.774930 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:15:20.789030 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:15:20.819212 ignition[863]: Ignition 2.21.0 Aug 19 08:15:20.819225 ignition[863]: Stage: kargs Aug 19 08:15:20.819593 ignition[863]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:20.819606 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:20.821613 ignition[863]: kargs: kargs passed Aug 19 08:15:20.821760 ignition[863]: Ignition finished successfully Aug 19 08:15:20.827635 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Aug 19 08:15:20.830625 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 08:15:20.917124 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.113 Aug 19 08:15:20.917145 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Aug 19 08:15:20.921175 ignition[872]: Ignition 2.21.0 Aug 19 08:15:20.921188 ignition[872]: Stage: disks Aug 19 08:15:20.921355 ignition[872]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:20.921367 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:20.922337 ignition[872]: disks: disks passed Aug 19 08:15:20.922390 ignition[872]: Ignition finished successfully Aug 19 08:15:20.928322 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:15:20.928781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:15:20.929264 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:15:20.929599 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:15:20.929946 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:15:20.930419 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:15:20.931688 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:15:20.960080 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 08:15:21.112818 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 08:15:21.115650 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:15:21.299031 kernel: EXT4-fs (vda9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:15:21.299826 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:15:21.301311 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:15:21.304323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:15:21.306167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:15:21.307265 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 08:15:21.307307 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:15:21.307331 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:15:21.325162 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:15:21.327549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 08:15:21.332054 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Aug 19 08:15:21.332090 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:21.333711 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:21.333737 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:15:21.338266 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
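systemd-fsck reports the root filesystem as clean with 15/553520 files and 52789/553472 blocks in use, i.e. roughly 0.003% of inodes and 9.5% of blocks. A small sketch that parses that summary line and derives the utilization percentages:

    import re

    def fsck_utilization(summary):
        """Parse an fsck summary like 'ROOT: clean, 15/553520 files, 52789/553472 blocks'."""
        m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", summary)
        files_used, files_total, blocks_used, blocks_total = map(int, m.groups())
        return 100.0 * files_used / files_total, 100.0 * blocks_used / blocks_total

    inode_pct, block_pct = fsck_utilization(
        "ROOT: clean, 15/553520 files, 52789/553472 blocks")
    print(f"inodes {inode_pct:.4f}% used, blocks {block_pct:.2f}% used")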
Aug 19 08:15:21.545876 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:15:21.551854 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:15:21.557196 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:15:21.562916 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:15:21.676609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:15:21.679383 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:15:21.680913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:15:21.701926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:15:21.703049 kernel: BTRFS info (device vda6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:21.726243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 08:15:21.752050 ignition[1003]: INFO : Ignition 2.21.0 Aug 19 08:15:21.752050 ignition[1003]: INFO : Stage: mount Aug 19 08:15:21.755465 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:21.757419 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:21.760275 ignition[1003]: INFO : mount: mount passed Aug 19 08:15:21.761127 ignition[1003]: INFO : Ignition finished successfully Aug 19 08:15:21.764209 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:15:21.765804 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:15:21.797037 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:15:21.853015 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016) Aug 19 08:15:21.853058 kernel: BTRFS info (device vda6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:15:21.855375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:15:21.855403 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 08:15:21.859724 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 08:15:21.903154 ignition[1033]: INFO : Ignition 2.21.0 Aug 19 08:15:21.903154 ignition[1033]: INFO : Stage: files Aug 19 08:15:21.905110 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:21.905110 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:21.908604 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:15:21.910467 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:15:21.910467 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:15:21.915756 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:15:21.917412 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:15:21.919678 unknown[1033]: wrote ssh authorized keys file for user: core Aug 19 08:15:21.921381 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:15:21.923182 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:15:21.923182 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 19 08:15:22.010820 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:15:22.406077 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 19 08:15:22.406077 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:15:22.434422 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 19 08:15:22.659204 systemd-networkd[860]: eth0: Gained IPv6LL Aug 19 08:15:22.695320 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 08:15:22.892365 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:15:22.894378 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:22.907968 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 19 08:15:23.146026 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 08:15:23.840514 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 19 08:15:23.840514 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 08:15:23.844324 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:15:24.134075 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:15:24.134075 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 08:15:24.134075 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 19 08:15:24.138999 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:15:24.138999 ignition[1033]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 08:15:24.138999 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 19 08:15:24.138999 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 08:15:24.162297 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:15:24.168504 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 08:15:24.170316 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 08:15:24.170316 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:15:24.173113 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:15:24.173113 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" 
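Each remote file above is fetched with "GET ...: attempt #1", i.e. the download is retried until it succeeds. Below is a hedged sketch of that retry pattern in Python with exponential backoff; the helper name, timeout, attempt limit, and backoff values are assumptions, not Ignition's actual parameters.

    import time
    import urllib.request

    def fetch_with_retries(url, max_attempts=5, timeout=10):
        """Download url, retrying with exponential backoff; logs each attempt."""
        for attempt in range(1, max_attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:
                if attempt == max_attempts:
                    raise
                time.sleep(2 ** attempt)  # back off before the next attempt

    # data = fetch_with_retries("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")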
Aug 19 08:15:24.173113 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:15:24.173113 ignition[1033]: INFO : files: files passed Aug 19 08:15:24.173113 ignition[1033]: INFO : Ignition finished successfully Aug 19 08:15:24.181004 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:15:24.183966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:15:24.186919 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:15:24.204116 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:15:24.204247 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:15:24.207233 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 08:15:24.210380 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:24.210380 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:24.214773 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:15:24.212545 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:15:24.215258 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:15:24.217495 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:15:24.267393 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:15:24.268533 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:15:24.271499 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:15:24.271802 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 08:15:24.272394 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:15:24.273396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:15:24.305251 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:15:24.326738 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:15:24.352510 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:24.353298 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:15:24.353631 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:15:24.353966 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:15:24.354093 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:15:24.354821 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:15:24.355329 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:15:24.355658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:15:24.356014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:15:24.356501 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:15:24.356821 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Aug 19 08:15:24.357327 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:15:24.357643 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:15:24.357992 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:15:24.358449 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:15:24.358767 systemd[1]: Stopped target swap.target - Swaps. Aug 19 08:15:24.359249 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:15:24.359361 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:15:24.384228 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:24.384552 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:15:24.386493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:15:24.388472 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:15:24.389064 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:15:24.389184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:15:24.392660 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:15:24.392788 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:15:24.395461 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:15:24.397486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:15:24.402035 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:24.442404 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 08:15:24.445371 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:15:24.445664 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:15:24.445782 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:15:24.448383 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:15:24.448496 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:15:24.450449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:15:24.450580 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:15:24.452290 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:15:24.452395 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:15:24.455324 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:15:24.456366 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:15:24.456627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:24.464837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:15:24.467817 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 08:15:24.468052 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:24.468576 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:15:24.468718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:15:24.479322 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:15:24.480100 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 19 08:15:24.493004 ignition[1089]: INFO : Ignition 2.21.0 Aug 19 08:15:24.493004 ignition[1089]: INFO : Stage: umount Aug 19 08:15:24.494779 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:15:24.494779 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 08:15:24.497843 ignition[1089]: INFO : umount: umount passed Aug 19 08:15:24.498659 ignition[1089]: INFO : Ignition finished successfully Aug 19 08:15:24.501444 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:15:24.501610 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:15:24.502399 systemd[1]: Stopped target network.target - Network. Aug 19 08:15:24.504454 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:15:24.504517 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:15:24.504797 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:15:24.504854 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:15:24.505299 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:15:24.505353 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:15:24.505614 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:15:24.505660 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:15:24.506073 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:15:24.506852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:15:24.515539 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:15:24.515682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:15:24.520086 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:15:24.520771 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:15:24.520889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:24.525320 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:24.533565 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:15:24.533748 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 08:15:24.538248 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:15:24.538443 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:15:24.538734 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:15:24.538775 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:15:24.596630 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 08:15:24.597065 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:15:24.597128 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:15:24.597519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:15:24.597567 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:24.602873 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:15:24.602947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Aug 19 08:15:24.603445 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:24.605612 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:15:24.631843 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:15:24.632604 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:15:24.632778 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:15:24.634421 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:15:24.634520 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:15:24.636621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:15:24.636700 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:24.638528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 19 08:15:24.638568 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:24.639014 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:15:24.639061 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:15:24.639660 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:15:24.639707 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:15:24.640298 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:15:24.640352 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:15:24.641635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:15:24.641924 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:15:24.642012 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:24.644954 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:15:24.645056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:15:24.648634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:15:24.648702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:24.670513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:15:24.670656 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:15:24.679089 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:15:24.679237 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:15:24.679832 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:15:24.682208 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:15:24.682279 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:15:24.686873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:15:24.706039 systemd[1]: Switching root. Aug 19 08:15:24.746480 systemd-journald[220]: Journal stopped Aug 19 08:15:26.318008 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Aug 19 08:15:26.318085 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:15:26.318101 kernel: SELinux: policy capability open_perms=1 Aug 19 08:15:26.318112 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:15:26.318124 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:15:26.318135 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:15:26.318146 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:15:26.318158 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:15:26.318172 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:15:26.318184 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:15:26.318195 kernel: audit: type=1403 audit(1755591325.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:15:26.318213 systemd[1]: Successfully loaded SELinux policy in 68.257ms. Aug 19 08:15:26.318248 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.800ms. Aug 19 08:15:26.318262 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:15:26.318276 systemd[1]: Detected virtualization kvm. Aug 19 08:15:26.318290 systemd[1]: Detected architecture x86-64. Aug 19 08:15:26.318303 systemd[1]: Detected first boot. Aug 19 08:15:26.318323 systemd[1]: Initializing machine ID from VM UUID. Aug 19 08:15:26.318336 zram_generator::config[1134]: No configuration found. Aug 19 08:15:26.318350 kernel: Guest personality initialized and is inactive Aug 19 08:15:26.318464 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 19 08:15:26.318476 kernel: Initialized host personality Aug 19 08:15:26.318487 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:15:26.318499 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:15:26.318512 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:15:26.318528 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:15:26.318540 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:15:26.318557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:15:26.318575 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:15:26.318587 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:15:26.318599 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:15:26.318612 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:15:26.318624 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:15:26.318636 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:15:26.318662 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:15:26.318685 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:15:26.318699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 19 08:15:26.318711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:15:26.318723 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 19 08:15:26.318735 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:15:26.318757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:15:26.318782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:15:26.318795 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:15:26.318807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:15:26.318819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:15:26.318832 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:15:26.318844 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:15:26.318866 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:15:26.318879 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:15:26.318891 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:15:26.318906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:15:26.318919 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:15:26.318945 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:15:26.318958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:15:26.318970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:15:26.318998 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:15:26.319023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:15:26.319038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:15:26.319050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:15:26.319062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:15:26.319078 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:15:26.319090 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:15:26.319109 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:15:26.319139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:26.319155 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:15:26.319175 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:15:26.319197 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:15:26.319209 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:15:26.319224 systemd[1]: Reached target machines.target - Containers. Aug 19 08:15:26.319236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
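Device units such as dev-disk-by\x2dlabel-OEM.device above are derived from paths by systemd's escaping rules: path separators become '-' and characters like '-' are hex-escaped. The sketch below is a simplified approximation of that path escaping, not a drop-in replacement for systemd-escape.

    def systemd_escape_path(path):
        """Approximate systemd path escaping: join components with '-',
        hex-escape anything that is not alphanumeric, '_' or an interior '.'."""
        def escape_component(component):
            out = []
            for i, ch in enumerate(component):
                keep = ch.isalnum() or ch == "_" or (ch == "." and i > 0)
                out.append(ch if keep else "\\x%02x" % ord(ch))
            return "".join(out)
        parts = [p for p in path.split("/") if p]
        return "-".join(escape_component(p) for p in parts)

    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
    # dev-disk-by\x2dlabel-OEM.device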
Aug 19 08:15:26.319249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:26.319261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:15:26.319274 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:15:26.319287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:26.319314 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:15:26.319328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:26.319340 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 19 08:15:26.319356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:26.319379 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:15:26.319393 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:15:26.319405 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:15:26.319417 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:15:26.319447 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:15:26.319469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:26.319482 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:15:26.319496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:15:26.319509 kernel: loop: module loaded Aug 19 08:15:26.319537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:15:26.319560 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:15:26.319581 kernel: fuse: init (API version 7.41) Aug 19 08:15:26.319611 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:15:26.319627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:15:26.319665 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:15:26.319698 systemd[1]: Stopped verity-setup.service. Aug 19 08:15:26.319728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:26.319761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:15:26.319793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:15:26.319823 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:15:26.319868 systemd-journald[1205]: Collecting audit messages is disabled. Aug 19 08:15:26.319892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:15:26.319904 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:15:26.319917 systemd-journald[1205]: Journal started Aug 19 08:15:26.319942 systemd-journald[1205]: Runtime Journal (/run/log/journal/3bb1b996077841e09aad34a196ed8081) is 6M, max 48.2M, 42.2M free. 
Aug 19 08:15:26.323412 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:15:26.094915 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:15:26.107499 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 19 08:15:26.108014 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:15:26.325888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:15:26.325920 kernel: ACPI: bus type drm_connector registered Aug 19 08:15:26.330251 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:15:26.332309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:15:26.334156 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:15:26.334433 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:15:26.336055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:26.336497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:26.338251 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:15:26.338498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:15:26.339919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:15:26.340260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:26.341778 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 19 08:15:26.342056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:15:26.343495 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:26.343767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:26.345297 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:15:26.346875 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:15:26.348466 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:15:26.350053 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:15:26.365362 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:15:26.367999 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:15:26.370204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:15:26.371459 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:15:26.371494 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:15:26.373833 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:15:26.384915 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:15:26.388147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:26.390414 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:15:26.394058 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Aug 19 08:15:26.395418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:15:26.397139 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:15:26.398514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:15:26.399568 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:15:26.406133 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:15:26.409057 systemd-journald[1205]: Time spent on flushing to /var/log/journal/3bb1b996077841e09aad34a196ed8081 is 14.361ms for 1039 entries. Aug 19 08:15:26.409057 systemd-journald[1205]: System Journal (/var/log/journal/3bb1b996077841e09aad34a196ed8081) is 8M, max 195.6M, 187.6M free. Aug 19 08:15:26.455497 systemd-journald[1205]: Received client request to flush runtime journal. Aug 19 08:15:26.455561 kernel: loop0: detected capacity change from 0 to 128016 Aug 19 08:15:26.410084 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:15:26.473557 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:15:26.413014 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:15:26.413532 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:15:26.423218 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 19 08:15:26.425268 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:15:26.428916 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:15:26.443178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:15:26.449871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:15:26.457582 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:15:26.485281 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:15:26.489965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:15:26.502897 kernel: loop1: detected capacity change from 0 to 111000 Aug 19 08:15:26.501362 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:15:26.521597 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 19 08:15:26.521616 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 19 08:15:26.526735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:15:26.528248 kernel: loop2: detected capacity change from 0 to 221472 Aug 19 08:15:26.555015 kernel: loop3: detected capacity change from 0 to 128016 Aug 19 08:15:26.569045 kernel: loop4: detected capacity change from 0 to 111000 Aug 19 08:15:26.582026 kernel: loop5: detected capacity change from 0 to 221472 Aug 19 08:15:26.618317 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 19 08:15:26.618927 (sd-merge)[1275]: Merged extensions into '/usr'. Aug 19 08:15:26.624490 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:15:26.624507 systemd[1]: Reloading... 
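systemd-journald reports 14.361 ms spent flushing 1039 entries to the persistent journal, which works out to roughly 14 microseconds per entry. A one-liner to reproduce that figure:

    flush_ms, entries = 14.361, 1039
    print(f"{flush_ms / entries * 1000:.1f} us per journal entry")  # ~13.8 us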
Aug 19 08:15:26.680017 zram_generator::config[1301]: No configuration found. Aug 19 08:15:26.926946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 19 08:15:26.927210 systemd[1]: Reloading finished in 302 ms. Aug 19 08:15:26.978195 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:15:26.979342 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:15:26.982340 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:15:27.007458 systemd[1]: Starting ensure-sysext.service... Aug 19 08:15:27.009298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:15:27.024562 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:15:27.024579 systemd[1]: Reloading... Aug 19 08:15:27.033167 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:15:27.033548 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 19 08:15:27.033925 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:15:27.034278 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:15:27.035332 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:15:27.035685 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 19 08:15:27.035824 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 19 08:15:27.041415 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:15:27.041432 systemd-tmpfiles[1339]: Skipping /boot Aug 19 08:15:27.055020 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:15:27.055034 systemd-tmpfiles[1339]: Skipping /boot Aug 19 08:15:27.144032 zram_generator::config[1366]: No configuration found. Aug 19 08:15:27.322534 systemd[1]: Reloading finished in 297 ms. Aug 19 08:15:27.347371 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:15:27.370489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:15:27.381204 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:15:27.383827 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:15:27.393554 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:15:27.398915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:15:27.405315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:15:27.409604 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:15:27.414824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.415171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:27.418349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
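The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") reflect first-match-wins handling: once a path has been claimed by one tmpfiles.d line, later lines for the same path are skipped. The following is a hedged sketch of that idea only, not systemd's actual parser; the final duplicate entry is hypothetical.

    def merge_tmpfiles_lines(lines):
        """Keep the first line seen for each path; report duplicates like the log above."""
        seen = {}
        for source, lineno, entry in lines:
            path = entry.split()[1]  # tmpfiles.d format: type path mode user group age argument
            if path in seen:
                print(f"{source}:{lineno}: Duplicate line for path \"{path}\", ignoring.")
                continue
            seen[path] = entry
        return list(seen.values())

    merged = merge_tmpfiles_lines([
        ("/usr/lib/tmpfiles.d/systemd.conf", 29, "d /var/lib/systemd 0755 root root -"),
        ("/usr/lib/tmpfiles.d/provision.conf", 20, "d /root 0700 root root -"),
        ("/usr/lib/tmpfiles.d/example.conf", 1, "d /root 0750 root root -"),  # hypothetical duplicate
    ])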
Aug 19 08:15:27.422285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:27.425488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:27.426759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:27.426864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:27.429769 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:15:27.431735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.433862 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:15:27.435942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:27.436187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:27.438299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:15:27.438560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:27.440602 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:27.441028 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:27.452634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.452869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:27.455572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:27.458900 augenrules[1439]: No rules Aug 19 08:15:27.459263 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:27.461026 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Aug 19 08:15:27.462807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:27.465024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:27.465206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:27.469137 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:15:27.470300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.471711 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:15:27.472381 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:15:27.475026 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:15:27.477130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:27.477361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:27.479254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 19 08:15:27.479527 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:27.481359 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:27.481578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:27.484454 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:15:27.486830 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:15:27.499232 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:15:27.501077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:15:27.513844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.516817 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:15:27.518181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:15:27.519214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:15:27.522210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:15:27.524189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:15:27.534303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:15:27.535506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:15:27.535546 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:15:27.538504 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:15:27.540067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:15:27.540094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:15:27.542029 systemd[1]: Finished ensure-sysext.service. Aug 19 08:15:27.543394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:15:27.543644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:15:27.549006 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:15:27.552312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:15:27.559011 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:15:27.559460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:15:27.563416 augenrules[1485]: /sbin/augenrules: No change Aug 19 08:15:27.562753 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:15:27.569844 augenrules[1513]: No rules Aug 19 08:15:27.570073 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 19 08:15:27.571618 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:15:27.572182 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 19 08:15:27.574363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:15:27.574620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:15:27.583529 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:15:27.584057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:15:27.645154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 19 08:15:27.650040 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:15:27.777008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 19 08:15:27.779035 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:15:27.786003 kernel: ACPI: button: Power Button [PWRF] Aug 19 08:15:27.790162 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Aug 19 08:15:27.790658 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 19 08:15:27.790845 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 19 08:15:27.799550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 08:15:27.855946 systemd-resolved[1408]: Positive Trust Anchors: Aug 19 08:15:27.856362 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:15:27.856397 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:15:27.862760 systemd-resolved[1408]: Defaulting to hostname 'linux'. Aug 19 08:15:27.864580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:15:27.866192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:15:27.908007 systemd-networkd[1490]: lo: Link UP Aug 19 08:15:27.908032 systemd-networkd[1490]: lo: Gained carrier Aug 19 08:15:27.909746 systemd-networkd[1490]: Enumeration completed Aug 19 08:15:27.909835 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:15:27.910830 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:27.910834 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:15:27.911171 systemd[1]: Reached target network.target - Network. Aug 19 08:15:27.911400 systemd-networkd[1490]: eth0: Link UP Aug 19 08:15:27.911576 systemd-networkd[1490]: eth0: Gained carrier Aug 19 08:15:27.911589 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:15:27.915138 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Aug 19 08:15:27.919198 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:15:27.926056 systemd-networkd[1490]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 08:15:27.931319 kernel: kvm_amd: TSC scaling supported Aug 19 08:15:27.931354 kernel: kvm_amd: Nested Virtualization enabled Aug 19 08:15:27.931367 kernel: kvm_amd: Nested Paging enabled Aug 19 08:15:27.931389 kernel: kvm_amd: LBR virtualization supported Aug 19 08:15:27.941487 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Aug 19 08:15:27.941545 kernel: kvm_amd: Virtual GIF supported Aug 19 08:15:27.945864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:27.952833 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:15:27.958338 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 19 08:15:27.961812 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:15:28.569397 systemd-timesyncd[1518]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 19 08:15:28.569452 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2025-08-19 08:15:28.569316 UTC. Aug 19 08:15:28.577137 systemd-resolved[1408]: Clock change detected. Flushing caches. Aug 19 08:15:28.586291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:15:28.586595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:28.589335 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:15:28.593388 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:15:28.612784 kernel: EDAC MC: Ver: 3.0.0 Aug 19 08:15:28.670000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:15:28.671764 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:15:28.673172 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:15:28.674617 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:15:28.676179 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:15:28.678049 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:15:28.679423 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:15:28.680950 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:15:28.682364 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:15:28.682425 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:15:28.683442 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:15:28.685618 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:15:28.688995 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:15:28.692605 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:15:28.694202 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
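The DHCPv4 lease above (10.0.0.113/16 from 10.0.0.1) implies the usual derived values for a /16 network. A quick sketch with Python's standard ipaddress module, using only the address printed in the log, shows the network, netmask and broadcast address the host ends up with.

#!/usr/bin/env python3
# Derive network parameters from the DHCPv4 lease reported by systemd-networkd.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.113/16")        # address/prefix from the log
print("network:  ", iface.network)                     # 10.0.0.0/16
print("netmask:  ", iface.network.netmask)             # 255.255.0.0
print("broadcast:", iface.network.broadcast_address)   # 10.0.255.255
print("gateway in subnet:", ipaddress.ip_address("10.0.0.1") in iface.network)  # True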
Aug 19 08:15:28.695582 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:15:28.706532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:15:28.708273 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:15:28.710102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:15:28.711882 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:15:28.712855 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:15:28.713811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:15:28.713838 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:15:28.715036 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:15:28.717352 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:15:28.719528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:15:28.723875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:15:28.726580 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:15:28.727874 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:15:28.729414 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:15:28.732380 jq[1571]: false Aug 19 08:15:28.733175 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:15:28.735857 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:15:28.738575 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:15:28.740194 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 19 08:15:28.743699 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing passwd entry cache Aug 19 08:15:28.743710 oslogin_cache_refresh[1573]: Refreshing passwd entry cache Aug 19 08:15:28.746924 extend-filesystems[1572]: Found /dev/vda6 Aug 19 08:15:28.749840 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:15:28.752092 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting users, quitting Aug 19 08:15:28.752087 oslogin_cache_refresh[1573]: Failure getting users, quitting Aug 19 08:15:28.752162 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:15:28.752117 oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:15:28.752206 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing group entry cache Aug 19 08:15:28.752178 oslogin_cache_refresh[1573]: Refreshing group entry cache Aug 19 08:15:28.753222 extend-filesystems[1572]: Found /dev/vda9 Aug 19 08:15:28.754150 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:15:28.754743 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:15:28.755359 systemd[1]: Starting update-engine.service - Update Engine... 
Aug 19 08:15:28.757301 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:15:28.758889 extend-filesystems[1572]: Checking size of /dev/vda9 Aug 19 08:15:28.761931 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting groups, quitting Aug 19 08:15:28.761931 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:15:28.761924 oslogin_cache_refresh[1573]: Failure getting groups, quitting Aug 19 08:15:28.761939 oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:15:28.767079 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:15:28.768998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 19 08:15:28.769241 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:15:28.769619 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:15:28.769906 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:15:28.773211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:15:28.775003 extend-filesystems[1572]: Resized partition /dev/vda9 Aug 19 08:15:28.775904 jq[1586]: true Aug 19 08:15:28.773509 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:15:28.781279 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:15:28.781661 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:15:28.787291 extend-filesystems[1601]: resize2fs 1.47.2 (1-Jan-2025) Aug 19 08:15:28.807644 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 19 08:15:28.807812 jq[1600]: true Aug 19 08:15:28.808081 update_engine[1585]: I20250819 08:15:28.794355 1585 main.cc:92] Flatcar Update Engine starting Aug 19 08:15:28.820228 (ntainerd)[1602]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:15:28.827771 tar[1599]: linux-amd64/helm Aug 19 08:15:28.833896 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 19 08:15:28.863149 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 19 08:15:28.863149 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 19 08:15:28.863149 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 19 08:15:28.871240 extend-filesystems[1572]: Resized filesystem in /dev/vda9 Aug 19 08:15:28.869569 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:15:28.871219 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:15:28.877342 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button) Aug 19 08:15:28.877368 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:15:28.878932 systemd-logind[1582]: New seat seat0. Aug 19 08:15:28.882439 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:15:28.882376 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:15:28.885811 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
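The resize2fs/EXT4 messages above are expressed in 4 KiB filesystem blocks ("1864699 (4k) blocks long"). A small sketch, using only the block counts printed in the log, converts them to GiB to show how much the root filesystem on /dev/vda9 grew during extend-filesystems.service.

#!/usr/bin/env python3
# Convert the ext4 block counts reported for /dev/vda9 into GiB (4 KiB blocks).
BLOCK_SIZE = 4096
old_blocks = 553_472      # "resizing filesystem from 553472 ..."
new_blocks = 1_864_699    # "... to 1864699 blocks"

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB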
Aug 19 08:15:28.890433 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 19 08:15:28.892244 dbus-daemon[1569]: [system] SELinux support is enabled Aug 19 08:15:28.892709 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:15:28.898154 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:15:28.898189 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:15:28.900121 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:15:28.900156 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:15:28.904338 dbus-daemon[1569]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 19 08:15:28.905369 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:15:28.909768 update_engine[1585]: I20250819 08:15:28.908948 1585 update_check_scheduler.cc:74] Next update check in 3m25s Aug 19 08:15:28.912151 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 19 08:15:29.002011 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:15:29.009106 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:15:29.013081 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:15:29.042294 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:15:29.042668 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 19 08:15:29.055281 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:15:29.064399 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:15:29.139098 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:15:29.143202 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:15:29.146997 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:15:29.148279 systemd[1]: Reached target getty.target - Login Prompts. 
Aug 19 08:15:29.310056 containerd[1602]: time="2025-08-19T08:15:29Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:15:29.311485 containerd[1602]: time="2025-08-19T08:15:29.311425661Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:15:29.328703 containerd[1602]: time="2025-08-19T08:15:29.328516310Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.278µs" Aug 19 08:15:29.329226 containerd[1602]: time="2025-08-19T08:15:29.329177800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:15:29.329226 containerd[1602]: time="2025-08-19T08:15:29.329216963Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:15:29.329621 containerd[1602]: time="2025-08-19T08:15:29.329572490Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 19 08:15:29.329668 containerd[1602]: time="2025-08-19T08:15:29.329634205Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:15:29.329723 containerd[1602]: time="2025-08-19T08:15:29.329700149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:15:29.329997 containerd[1602]: time="2025-08-19T08:15:29.329960527Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:15:29.329997 containerd[1602]: time="2025-08-19T08:15:29.329988199Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:15:29.330418 containerd[1602]: time="2025-08-19T08:15:29.330388299Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:15:29.330418 containerd[1602]: time="2025-08-19T08:15:29.330410401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:15:29.330477 containerd[1602]: time="2025-08-19T08:15:29.330421842Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:15:29.330477 containerd[1602]: time="2025-08-19T08:15:29.330430638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:15:29.330618 containerd[1602]: time="2025-08-19T08:15:29.330588184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:15:29.331057 containerd[1602]: time="2025-08-19T08:15:29.331018931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:15:29.331100 containerd[1602]: time="2025-08-19T08:15:29.331061421Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:15:29.331100 containerd[1602]: time="2025-08-19T08:15:29.331073353Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:15:29.331139 containerd[1602]: time="2025-08-19T08:15:29.331118438Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:15:29.331776 containerd[1602]: time="2025-08-19T08:15:29.331463805Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:15:29.331776 containerd[1602]: time="2025-08-19T08:15:29.331638683Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:15:29.340867 containerd[1602]: time="2025-08-19T08:15:29.340584894Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:15:29.340867 containerd[1602]: time="2025-08-19T08:15:29.340861473Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:15:29.340867 containerd[1602]: time="2025-08-19T08:15:29.340883955Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340896859Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340915243Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340927847Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340943486Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340958004Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340969565Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340978572Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.340987759Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:15:29.341073 containerd[1602]: time="2025-08-19T08:15:29.341014049Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:15:29.341283 containerd[1602]: time="2025-08-19T08:15:29.341242988Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:15:29.341283 containerd[1602]: time="2025-08-19T08:15:29.341273224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:15:29.341342 containerd[1602]: time="2025-08-19T08:15:29.341300997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:15:29.341342 containerd[1602]: time="2025-08-19T08:15:29.341318620Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 19 08:15:29.341342 containerd[1602]: time="2025-08-19T08:15:29.341330382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:15:29.341420 containerd[1602]: time="2025-08-19T08:15:29.341354377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:15:29.341420 containerd[1602]: time="2025-08-19T08:15:29.341377720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:15:29.341420 containerd[1602]: time="2025-08-19T08:15:29.341390434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:15:29.341420 containerd[1602]: time="2025-08-19T08:15:29.341402036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:15:29.341420 containerd[1602]: time="2025-08-19T08:15:29.341413077Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:15:29.341556 containerd[1602]: time="2025-08-19T08:15:29.341423556Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:15:29.341634 containerd[1602]: time="2025-08-19T08:15:29.341539864Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:15:29.341634 containerd[1602]: time="2025-08-19T08:15:29.341625906Z" level=info msg="Start snapshots syncer" Aug 19 08:15:29.341701 containerd[1602]: time="2025-08-19T08:15:29.341659629Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:15:29.342143 containerd[1602]: time="2025-08-19T08:15:29.342052285Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:15:29.342425 containerd[1602]: time="2025-08-19T08:15:29.342153304Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:15:29.344894 containerd[1602]: time="2025-08-19T08:15:29.344844710Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:15:29.345046 containerd[1602]: time="2025-08-19T08:15:29.344994420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:15:29.345046 containerd[1602]: time="2025-08-19T08:15:29.345025759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345036429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345073158Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345093406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345103695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345115287Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:15:29.345140 containerd[1602]: time="2025-08-19T08:15:29.345142548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:15:29.345296 containerd[1602]: 
time="2025-08-19T08:15:29.345154340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:15:29.345296 containerd[1602]: time="2025-08-19T08:15:29.345165351Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:15:29.345296 containerd[1602]: time="2025-08-19T08:15:29.345233198Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:15:29.345296 containerd[1602]: time="2025-08-19T08:15:29.345252494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345329308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345342282Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345350408Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345360587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345374012Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345392597Z" level=info msg="runtime interface created" Aug 19 08:15:29.345406 containerd[1602]: time="2025-08-19T08:15:29.345397897Z" level=info msg="created NRI interface" Aug 19 08:15:29.345601 containerd[1602]: time="2025-08-19T08:15:29.345415640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:15:29.345601 containerd[1602]: time="2025-08-19T08:15:29.345428975Z" level=info msg="Connect containerd service" Aug 19 08:15:29.345601 containerd[1602]: time="2025-08-19T08:15:29.345458100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:15:29.346569 containerd[1602]: time="2025-08-19T08:15:29.346541000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:15:29.358042 tar[1599]: linux-amd64/LICENSE Aug 19 08:15:29.358042 tar[1599]: linux-amd64/README.md Aug 19 08:15:29.387318 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 19 08:15:29.482340 containerd[1602]: time="2025-08-19T08:15:29.482258131Z" level=info msg="Start subscribing containerd event" Aug 19 08:15:29.482502 containerd[1602]: time="2025-08-19T08:15:29.482343351Z" level=info msg="Start recovering state" Aug 19 08:15:29.482571 containerd[1602]: time="2025-08-19T08:15:29.482530932Z" level=info msg="Start event monitor" Aug 19 08:15:29.482571 containerd[1602]: time="2025-08-19T08:15:29.482564866Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:15:29.482571 containerd[1602]: time="2025-08-19T08:15:29.482576257Z" level=info msg="Start streaming server" Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482579303Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482675964Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482622965Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482701121Z" level=info msg="runtime interface starting up..." Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482707373Z" level=info msg="starting plugins..." Aug 19 08:15:29.482793 containerd[1602]: time="2025-08-19T08:15:29.482764821Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:15:29.482977 containerd[1602]: time="2025-08-19T08:15:29.482934318Z" level=info msg="containerd successfully booted in 0.173922s" Aug 19 08:15:29.483099 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:15:30.306033 systemd-networkd[1490]: eth0: Gained IPv6LL Aug 19 08:15:30.310108 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:15:30.312305 systemd[1]: Reached target network-online.target - Network is Online. Aug 19 08:15:30.315497 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 19 08:15:30.318822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:30.321561 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:15:30.357958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:15:30.361154 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 19 08:15:30.361457 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 19 08:15:30.362996 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:15:32.220432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:32.222806 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:15:32.225451 systemd[1]: Startup finished in 4.139s (kernel) + 7.893s (initrd) + 6.145s (userspace) = 18.178s. Aug 19 08:15:32.229405 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:15:32.235333 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 19 08:15:32.239129 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:57056.service - OpenSSH per-connection server daemon (10.0.0.1:57056). 
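The "Startup finished" line above reports the kernel, initrd and userspace phases separately; the total is simply their sum, with each phase rounded to the millisecond before printing, which is why the sum of the printed values can differ from the printed total by a millisecond. A trivial check using the figures from the log:

#!/usr/bin/env python3
# Sum the boot phases reported by systemd ("Startup finished in ...").
phases = {"kernel": 4.139, "initrd": 7.893, "userspace": 6.145}
total = sum(phases.values())
print(f"total: {total:.3f}s")  # 18.177s here; the log shows 18.178s because the
                               # phases are rounded individually before summing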
Aug 19 08:15:32.318595 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57056 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:32.321017 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:32.328722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:15:32.330047 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:15:32.337843 systemd-logind[1582]: New session 1 of user core. Aug 19 08:15:32.355202 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:15:32.359684 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 19 08:15:32.378158 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:15:32.381453 systemd-logind[1582]: New session c1 of user core. Aug 19 08:15:32.590962 systemd[1719]: Queued start job for default target default.target. Aug 19 08:15:32.605210 systemd[1719]: Created slice app.slice - User Application Slice. Aug 19 08:15:32.605244 systemd[1719]: Reached target paths.target - Paths. Aug 19 08:15:32.605293 systemd[1719]: Reached target timers.target - Timers. Aug 19 08:15:32.607046 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:15:32.620291 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:15:32.620430 systemd[1719]: Reached target sockets.target - Sockets. Aug 19 08:15:32.620470 systemd[1719]: Reached target basic.target - Basic System. Aug 19 08:15:32.620510 systemd[1719]: Reached target default.target - Main User Target. Aug 19 08:15:32.620546 systemd[1719]: Startup finished in 228ms. Aug 19 08:15:32.620885 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:15:32.622797 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:15:32.713839 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:57068.service - OpenSSH per-connection server daemon (10.0.0.1:57068). Aug 19 08:15:32.781618 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 57068 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:32.783398 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:32.789121 systemd-logind[1582]: New session 2 of user core. Aug 19 08:15:32.795949 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 19 08:15:32.857464 sshd[1734]: Connection closed by 10.0.0.1 port 57068 Aug 19 08:15:32.858039 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:32.870293 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:57068.service: Deactivated successfully. Aug 19 08:15:32.872500 systemd[1]: session-2.scope: Deactivated successfully. Aug 19 08:15:32.873478 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Aug 19 08:15:32.877508 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:57084.service - OpenSSH per-connection server daemon (10.0.0.1:57084). Aug 19 08:15:32.878209 systemd-logind[1582]: Removed session 2. 
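The "SHA256:kecLVWRG..." string in the sshd "Accepted publickey" lines is the standard OpenSSH key fingerprint: the SHA-256 digest of the raw public key blob, base64-encoded with the trailing padding stripped. A small sketch, assuming a public key file such as key.pub in the usual "ssh-rsa AAAA... comment" format (hypothetical filename), reproduces that fingerprint format.

#!/usr/bin/env python3
# Compute an OpenSSH-style SHA256 fingerprint from a public key file.
# Assumes key.pub holds a line like "ssh-rsa AAAAB3Nza... user@host".
import base64
import hashlib

with open("key.pub") as f:
    key_b64 = f.read().split()[1]   # the base64 key blob is the second field

digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))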
Aug 19 08:15:32.913207 kubelet[1703]: E0819 08:15:32.913101 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:15:32.918431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:15:32.918642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:15:32.919092 systemd[1]: kubelet.service: Consumed 2.404s CPU time, 265.5M memory peak. Aug 19 08:15:32.936315 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 57084 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:32.938639 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:32.945226 systemd-logind[1582]: New session 3 of user core. Aug 19 08:15:32.955953 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 19 08:15:33.010876 sshd[1745]: Connection closed by 10.0.0.1 port 57084 Aug 19 08:15:33.011456 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:33.023897 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:57084.service: Deactivated successfully. Aug 19 08:15:33.026571 systemd[1]: session-3.scope: Deactivated successfully. Aug 19 08:15:33.027476 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Aug 19 08:15:33.030402 systemd-logind[1582]: Removed session 3. Aug 19 08:15:33.032211 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:57096.service - OpenSSH per-connection server daemon (10.0.0.1:57096). Aug 19 08:15:33.099311 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 57096 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:33.101317 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:33.106613 systemd-logind[1582]: New session 4 of user core. Aug 19 08:15:33.117980 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 08:15:33.174092 sshd[1754]: Connection closed by 10.0.0.1 port 57096 Aug 19 08:15:33.174520 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:33.191982 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:57096.service: Deactivated successfully. Aug 19 08:15:33.194259 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 08:15:33.195006 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Aug 19 08:15:33.198346 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:57102.service - OpenSSH per-connection server daemon (10.0.0.1:57102). Aug 19 08:15:33.199240 systemd-logind[1582]: Removed session 4. Aug 19 08:15:33.257235 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 57102 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:33.258862 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:33.263356 systemd-logind[1582]: New session 5 of user core. Aug 19 08:15:33.281901 systemd[1]: Started session-5.scope - Session 5 of User core. 
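The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written during kubeadm init or kubeadm join, so these failures (and the restarts that follow) are expected until the node is actually bootstrapped. Purely as an illustration of what the kubelet is looking for, and not a substitute for the file kubeadm generates, the sketch below writes a minimal KubeletConfiguration stub to that path; cgroupDriver: systemd matches the SystemdCgroup=true setting in the containerd CRI config above.

#!/usr/bin/env python3
# Illustration only: create a minimal KubeletConfiguration stub at the path the
# kubelet is failing to read. On a real node this file comes from kubeadm.
from pathlib import Path

STUB = (
    "apiVersion: kubelet.config.k8s.io/v1beta1\n"
    "kind: KubeletConfiguration\n"
    "cgroupDriver: systemd\n"
)

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)  # /var/lib/kubelet may not exist yet
path.write_text(STUB)
print(f"wrote {path} ({len(STUB)} bytes)")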
Aug 19 08:15:33.341648 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 08:15:33.341995 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:33.361221 sudo[1764]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:33.362820 sshd[1763]: Connection closed by 10.0.0.1 port 57102 Aug 19 08:15:33.363216 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:33.380566 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:57102.service: Deactivated successfully. Aug 19 08:15:33.382296 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 08:15:33.383070 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Aug 19 08:15:33.385665 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112). Aug 19 08:15:33.386486 systemd-logind[1582]: Removed session 5. Aug 19 08:15:33.446600 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:33.448479 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:33.453425 systemd-logind[1582]: New session 6 of user core. Aug 19 08:15:33.466882 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 19 08:15:33.523613 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 08:15:33.524140 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:33.534750 sudo[1775]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:33.544591 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 08:15:33.544925 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:33.558442 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:15:33.604534 augenrules[1797]: No rules Aug 19 08:15:33.606416 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:15:33.606796 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:15:33.608602 sudo[1774]: pam_unix(sudo:session): session closed for user root Aug 19 08:15:33.610336 sshd[1773]: Connection closed by 10.0.0.1 port 57112 Aug 19 08:15:33.610662 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Aug 19 08:15:33.619413 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:57112.service: Deactivated successfully. Aug 19 08:15:33.621212 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 08:15:33.621964 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Aug 19 08:15:33.624554 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:57126.service - OpenSSH per-connection server daemon (10.0.0.1:57126). Aug 19 08:15:33.625386 systemd-logind[1582]: Removed session 6. Aug 19 08:15:33.673226 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 57126 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:15:33.674618 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:15:33.678907 systemd-logind[1582]: New session 7 of user core. Aug 19 08:15:33.688860 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 19 08:15:33.742170 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 08:15:33.742592 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 08:15:34.438339 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 08:15:34.460315 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 08:15:34.946685 dockerd[1830]: time="2025-08-19T08:15:34.946598578Z" level=info msg="Starting up" Aug 19 08:15:34.947567 dockerd[1830]: time="2025-08-19T08:15:34.947529392Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 08:15:34.971342 dockerd[1830]: time="2025-08-19T08:15:34.971281910Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 08:15:36.555453 dockerd[1830]: time="2025-08-19T08:15:36.555386570Z" level=info msg="Loading containers: start." Aug 19 08:15:36.567139 kernel: Initializing XFRM netlink socket Aug 19 08:15:36.854155 systemd-networkd[1490]: docker0: Link UP Aug 19 08:15:36.859237 dockerd[1830]: time="2025-08-19T08:15:36.859185004Z" level=info msg="Loading containers: done." Aug 19 08:15:36.880211 dockerd[1830]: time="2025-08-19T08:15:36.880150046Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 08:15:36.880381 dockerd[1830]: time="2025-08-19T08:15:36.880302572Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 08:15:36.880480 dockerd[1830]: time="2025-08-19T08:15:36.880451601Z" level=info msg="Initializing buildkit" Aug 19 08:15:36.914350 dockerd[1830]: time="2025-08-19T08:15:36.914290367Z" level=info msg="Completed buildkit initialization" Aug 19 08:15:36.922293 dockerd[1830]: time="2025-08-19T08:15:36.922236733Z" level=info msg="Daemon has completed initialization" Aug 19 08:15:36.922409 dockerd[1830]: time="2025-08-19T08:15:36.922345878Z" level=info msg="API listen on /run/docker.sock" Aug 19 08:15:36.922627 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 08:15:38.000294 containerd[1602]: time="2025-08-19T08:15:38.000236909Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Aug 19 08:15:38.747254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426151648.mount: Deactivated successfully. 
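Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that UNIX socket. A minimal sketch, using only the Python standard library and assuming permission to access the socket (root or membership in the docker group), issues a plain HTTP GET /version request against it; in practice the Docker SDK or curl --unix-socket would be used instead.

#!/usr/bin/env python3
# Query the Docker Engine API over /run/docker.sock with a raw HTTP/1.0 request.
import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/docker.sock")
sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")

response = b""
while chunk := sock.recv(4096):   # HTTP/1.0: server closes when the body is done
    response += chunk
sock.close()

print(response.decode(errors="replace"))  # HTTP headers followed by a JSON body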
Aug 19 08:15:40.888643 containerd[1602]: time="2025-08-19T08:15:40.888514107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:40.889515 containerd[1602]: time="2025-08-19T08:15:40.889069137Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Aug 19 08:15:40.890917 containerd[1602]: time="2025-08-19T08:15:40.890830299Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:40.895818 containerd[1602]: time="2025-08-19T08:15:40.895755953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:40.897909 containerd[1602]: time="2025-08-19T08:15:40.897869144Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.897568045s" Aug 19 08:15:40.897955 containerd[1602]: time="2025-08-19T08:15:40.897920551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Aug 19 08:15:40.899901 containerd[1602]: time="2025-08-19T08:15:40.899867521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Aug 19 08:15:42.516783 containerd[1602]: time="2025-08-19T08:15:42.516678794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:42.517466 containerd[1602]: time="2025-08-19T08:15:42.517414954Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Aug 19 08:15:42.518905 containerd[1602]: time="2025-08-19T08:15:42.518819878Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:42.521834 containerd[1602]: time="2025-08-19T08:15:42.521800766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:42.523050 containerd[1602]: time="2025-08-19T08:15:42.523008290Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.622999564s" Aug 19 08:15:42.523050 containerd[1602]: time="2025-08-19T08:15:42.523049537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Aug 19 
08:15:42.523718 containerd[1602]: time="2025-08-19T08:15:42.523668167Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Aug 19 08:15:42.973319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 08:15:42.975129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:43.342837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:43.357210 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:15:43.683656 kubelet[2114]: E0819 08:15:43.683503 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:15:43.692614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:15:43.692854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:15:43.693255 systemd[1]: kubelet.service: Consumed 334ms CPU time, 111.1M memory peak. Aug 19 08:15:45.609400 containerd[1602]: time="2025-08-19T08:15:45.609331524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:45.610974 containerd[1602]: time="2025-08-19T08:15:45.610888683Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Aug 19 08:15:45.614764 containerd[1602]: time="2025-08-19T08:15:45.612912747Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:45.617134 containerd[1602]: time="2025-08-19T08:15:45.617088846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:45.618016 containerd[1602]: time="2025-08-19T08:15:45.617944860Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 3.094238702s" Aug 19 08:15:45.618016 containerd[1602]: time="2025-08-19T08:15:45.617996317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Aug 19 08:15:45.618504 containerd[1602]: time="2025-08-19T08:15:45.618473291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Aug 19 08:15:47.631444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778560299.mount: Deactivated successfully. 
Aug 19 08:15:48.046106 containerd[1602]: time="2025-08-19T08:15:48.046027107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:48.046754 containerd[1602]: time="2025-08-19T08:15:48.046678749Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Aug 19 08:15:48.047912 containerd[1602]: time="2025-08-19T08:15:48.047853401Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:48.049748 containerd[1602]: time="2025-08-19T08:15:48.049704481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:48.050205 containerd[1602]: time="2025-08-19T08:15:48.050159715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.431654293s" Aug 19 08:15:48.050205 containerd[1602]: time="2025-08-19T08:15:48.050200952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Aug 19 08:15:48.050803 containerd[1602]: time="2025-08-19T08:15:48.050775629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 08:15:48.732036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34058381.mount: Deactivated successfully. 
Aug 19 08:15:52.692836 containerd[1602]: time="2025-08-19T08:15:52.692768268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:52.693594 containerd[1602]: time="2025-08-19T08:15:52.693565793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 19 08:15:52.694603 containerd[1602]: time="2025-08-19T08:15:52.694553765Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:52.697126 containerd[1602]: time="2025-08-19T08:15:52.697093025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:52.698108 containerd[1602]: time="2025-08-19T08:15:52.698080356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.647277014s" Aug 19 08:15:52.698108 containerd[1602]: time="2025-08-19T08:15:52.698109450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 19 08:15:52.698583 containerd[1602]: time="2025-08-19T08:15:52.698555927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 08:15:53.314975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350082771.mount: Deactivated successfully. 
Aug 19 08:15:53.321015 containerd[1602]: time="2025-08-19T08:15:53.320973869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:53.321777 containerd[1602]: time="2025-08-19T08:15:53.321713846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 19 08:15:53.322948 containerd[1602]: time="2025-08-19T08:15:53.322919666Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:53.324904 containerd[1602]: time="2025-08-19T08:15:53.324873339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 08:15:53.325441 containerd[1602]: time="2025-08-19T08:15:53.325410937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.824032ms" Aug 19 08:15:53.325441 containerd[1602]: time="2025-08-19T08:15:53.325439630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 19 08:15:53.326126 containerd[1602]: time="2025-08-19T08:15:53.325961679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 19 08:15:53.723459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 19 08:15:53.725602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:53.771467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351972204.mount: Deactivated successfully. Aug 19 08:15:53.952834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:53.970091 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:15:54.018169 kubelet[2203]: E0819 08:15:54.018017 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:15:54.022644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:15:54.022914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:15:54.023413 systemd[1]: kubelet.service: Consumed 245ms CPU time, 112.3M memory peak. 
Aug 19 08:15:55.936297 containerd[1602]: time="2025-08-19T08:15:55.936210493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:55.937041 containerd[1602]: time="2025-08-19T08:15:55.936986809Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Aug 19 08:15:55.938450 containerd[1602]: time="2025-08-19T08:15:55.938403504Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:55.941174 containerd[1602]: time="2025-08-19T08:15:55.941139523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:15:55.942989 containerd[1602]: time="2025-08-19T08:15:55.942946481Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.616955798s" Aug 19 08:15:55.942989 containerd[1602]: time="2025-08-19T08:15:55.942986887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 19 08:15:57.973387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:57.973552 systemd[1]: kubelet.service: Consumed 245ms CPU time, 112.3M memory peak. Aug 19 08:15:57.975927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:58.003164 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-7.scope)... Aug 19 08:15:58.003182 systemd[1]: Reloading... Aug 19 08:15:58.104787 zram_generator::config[2332]: No configuration found. Aug 19 08:15:58.333463 systemd[1]: Reloading finished in 329 ms. Aug 19 08:15:58.389086 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 08:15:58.389223 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 08:15:58.389570 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:58.391404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:15:58.590812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:15:58.607070 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:15:58.652352 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:58.652352 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 08:15:58.652352 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:15:58.652814 kubelet[2379]: I0819 08:15:58.652396 2379 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:15:59.136445 kubelet[2379]: I0819 08:15:59.136380 2379 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:15:59.136445 kubelet[2379]: I0819 08:15:59.136419 2379 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:15:59.136712 kubelet[2379]: I0819 08:15:59.136683 2379 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:15:59.184110 kubelet[2379]: E0819 08:15:59.184044 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:59.184873 kubelet[2379]: I0819 08:15:59.184853 2379 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:15:59.191753 kubelet[2379]: I0819 08:15:59.191700 2379 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:15:59.199867 kubelet[2379]: I0819 08:15:59.199835 2379 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 19 08:15:59.200017 kubelet[2379]: I0819 08:15:59.199981 2379 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:15:59.200212 kubelet[2379]: I0819 08:15:59.200145 2379 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:15:59.200371 kubelet[2379]: I0819 08:15:59.200181 2379 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:15:59.200493 kubelet[2379]: I0819 08:15:59.200382 2379 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:15:59.200493 kubelet[2379]: I0819 08:15:59.200390 2379 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:15:59.200548 kubelet[2379]: I0819 08:15:59.200538 2379 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:59.202767 kubelet[2379]: I0819 08:15:59.202697 2379 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:15:59.202767 kubelet[2379]: I0819 08:15:59.202771 2379 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:15:59.202959 kubelet[2379]: I0819 08:15:59.202833 2379 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:15:59.202959 kubelet[2379]: I0819 08:15:59.202875 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:15:59.206496 kubelet[2379]: I0819 08:15:59.206465 2379 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:15:59.206928 kubelet[2379]: I0819 08:15:59.206893 2379 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:15:59.206994 kubelet[2379]: W0819 08:15:59.206974 2379 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 19 08:15:59.208047 kubelet[2379]: W0819 08:15:59.207971 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:15:59.208165 kubelet[2379]: W0819 08:15:59.208126 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:15:59.208220 kubelet[2379]: E0819 08:15:59.208171 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:59.208370 kubelet[2379]: E0819 08:15:59.208144 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:59.209043 kubelet[2379]: I0819 08:15:59.208830 2379 server.go:1274] "Started kubelet" Aug 19 08:15:59.210399 kubelet[2379]: I0819 08:15:59.210371 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:15:59.215453 kubelet[2379]: I0819 08:15:59.215415 2379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:15:59.216411 kubelet[2379]: I0819 08:15:59.216376 2379 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:15:59.218418 kubelet[2379]: E0819 08:15:59.216879 2379 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d1d0a2862b202 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 08:15:59.208796674 +0000 UTC m=+0.594652052,LastTimestamp:2025-08-19 08:15:59.208796674 +0000 UTC m=+0.594652052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 08:15:59.218418 kubelet[2379]: E0819 08:15:59.217816 2379 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:15:59.218418 kubelet[2379]: I0819 08:15:59.217981 2379 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:15:59.218418 kubelet[2379]: I0819 08:15:59.218018 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:15:59.218418 kubelet[2379]: I0819 08:15:59.218073 2379 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:15:59.218418 kubelet[2379]: I0819 08:15:59.218136 2379 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:15:59.218418 kubelet[2379]: I0819 08:15:59.218364 2379 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:15:59.218418 kubelet[2379]: W0819 08:15:59.218386 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:15:59.218716 kubelet[2379]: E0819 08:15:59.218421 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:59.218716 kubelet[2379]: I0819 08:15:59.218492 2379 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:15:59.219519 kubelet[2379]: E0819 08:15:59.218867 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:15:59.219519 kubelet[2379]: E0819 08:15:59.218959 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" Aug 19 08:15:59.219789 kubelet[2379]: I0819 08:15:59.219762 2379 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:15:59.219890 kubelet[2379]: I0819 08:15:59.219862 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:15:59.221065 kubelet[2379]: I0819 08:15:59.221042 2379 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:15:59.234443 kubelet[2379]: I0819 08:15:59.234392 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:15:59.236067 kubelet[2379]: I0819 08:15:59.236049 2379 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 19 08:15:59.236165 kubelet[2379]: I0819 08:15:59.236154 2379 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:15:59.236242 kubelet[2379]: I0819 08:15:59.236231 2379 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:15:59.236354 kubelet[2379]: E0819 08:15:59.236334 2379 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:15:59.238162 kubelet[2379]: W0819 08:15:59.238094 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:15:59.238226 kubelet[2379]: E0819 08:15:59.238169 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:15:59.240493 kubelet[2379]: I0819 08:15:59.240466 2379 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:15:59.240553 kubelet[2379]: I0819 08:15:59.240503 2379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:15:59.240553 kubelet[2379]: I0819 08:15:59.240519 2379 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:15:59.319934 kubelet[2379]: E0819 08:15:59.319887 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:15:59.337180 kubelet[2379]: E0819 08:15:59.337132 2379 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:15:59.369173 kubelet[2379]: I0819 08:15:59.369148 2379 policy_none.go:49] "None policy: Start" Aug 19 08:15:59.369845 kubelet[2379]: I0819 08:15:59.369811 2379 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:15:59.369845 kubelet[2379]: I0819 08:15:59.369843 2379 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:15:59.380144 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:15:59.398851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:15:59.403018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 08:15:59.420276 kubelet[2379]: E0819 08:15:59.420218 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:15:59.420706 kubelet[2379]: E0819 08:15:59.420640 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms" Aug 19 08:15:59.424866 kubelet[2379]: I0819 08:15:59.424837 2379 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:15:59.425112 kubelet[2379]: I0819 08:15:59.425090 2379 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:15:59.425167 kubelet[2379]: I0819 08:15:59.425112 2379 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:15:59.425422 kubelet[2379]: I0819 08:15:59.425400 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:15:59.427445 kubelet[2379]: E0819 08:15:59.427415 2379 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 08:15:59.527181 kubelet[2379]: I0819 08:15:59.527100 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:15:59.528034 kubelet[2379]: E0819 08:15:59.527983 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Aug 19 08:15:59.547702 systemd[1]: Created slice kubepods-burstable-podb01ff527f92d8be52a38400c5a3712e3.slice - libcontainer container kubepods-burstable-podb01ff527f92d8be52a38400c5a3712e3.slice. Aug 19 08:15:59.562906 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Aug 19 08:15:59.585122 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Aug 19 08:15:59.620070 kubelet[2379]: I0819 08:15:59.620023 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:15:59.620070 kubelet[2379]: I0819 08:15:59.620075 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:15:59.620236 kubelet[2379]: I0819 08:15:59.620111 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:15:59.620236 kubelet[2379]: I0819 08:15:59.620136 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:15:59.620236 kubelet[2379]: I0819 08:15:59.620168 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:15:59.620236 kubelet[2379]: I0819 08:15:59.620190 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:15:59.620236 kubelet[2379]: I0819 08:15:59.620215 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:15:59.620351 kubelet[2379]: I0819 08:15:59.620249 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:15:59.620351 kubelet[2379]: I0819 08:15:59.620272 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " 
pod="kube-system/kube-apiserver-localhost" Aug 19 08:15:59.729548 kubelet[2379]: I0819 08:15:59.729510 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:15:59.729997 kubelet[2379]: E0819 08:15:59.729919 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Aug 19 08:15:59.822107 kubelet[2379]: E0819 08:15:59.822032 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms" Aug 19 08:15:59.861725 kubelet[2379]: E0819 08:15:59.861661 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:15:59.862659 containerd[1602]: time="2025-08-19T08:15:59.862603047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b01ff527f92d8be52a38400c5a3712e3,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:59.883251 kubelet[2379]: E0819 08:15:59.883193 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:15:59.883938 containerd[1602]: time="2025-08-19T08:15:59.883895092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Aug 19 08:15:59.888371 kubelet[2379]: E0819 08:15:59.888327 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:15:59.889011 containerd[1602]: time="2025-08-19T08:15:59.888965317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:00.031827 containerd[1602]: time="2025-08-19T08:16:00.030523919Z" level=info msg="connecting to shim 7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0" address="unix:///run/containerd/s/c0c7c0987a30573a004f3db3ae69cfa8bb779fbde8b66a8fd63674ccb654c849" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:00.035500 containerd[1602]: time="2025-08-19T08:16:00.035456295Z" level=info msg="connecting to shim d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88" address="unix:///run/containerd/s/74a85b215526e346d09cb80c23364ae151002f794bb6f0bb28494c5e5cbef059" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:00.040691 kubelet[2379]: W0819 08:16:00.040623 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:16:00.040846 kubelet[2379]: E0819 08:16:00.040818 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 
08:16:00.047727 containerd[1602]: time="2025-08-19T08:16:00.047682305Z" level=info msg="connecting to shim 8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea" address="unix:///run/containerd/s/9b9420f2f10d544aea65285ab2969409832f0c60cd6715d61c687e3ab9a0fbfe" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:00.068420 systemd[1]: Started cri-containerd-7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0.scope - libcontainer container 7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0. Aug 19 08:16:00.074213 systemd[1]: Started cri-containerd-d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88.scope - libcontainer container d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88. Aug 19 08:16:00.085973 systemd[1]: Started cri-containerd-8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea.scope - libcontainer container 8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea. Aug 19 08:16:00.132729 kubelet[2379]: I0819 08:16:00.132689 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:16:00.133763 kubelet[2379]: E0819 08:16:00.133714 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Aug 19 08:16:00.165907 kubelet[2379]: W0819 08:16:00.165726 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused Aug 19 08:16:00.166077 kubelet[2379]: E0819 08:16:00.166035 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" Aug 19 08:16:00.166667 containerd[1602]: time="2025-08-19T08:16:00.166624208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88\"" Aug 19 08:16:00.167994 kubelet[2379]: E0819 08:16:00.167973 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:00.170668 containerd[1602]: time="2025-08-19T08:16:00.170632602Z" level=info msg="CreateContainer within sandbox \"d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 08:16:00.172714 containerd[1602]: time="2025-08-19T08:16:00.172673949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b01ff527f92d8be52a38400c5a3712e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0\"" Aug 19 08:16:00.173911 kubelet[2379]: E0819 08:16:00.173875 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:00.176407 containerd[1602]: time="2025-08-19T08:16:00.176362764Z" 
level=info msg="CreateContainer within sandbox \"7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 08:16:00.184146 containerd[1602]: time="2025-08-19T08:16:00.184102544Z" level=info msg="Container 2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:00.185487 containerd[1602]: time="2025-08-19T08:16:00.185453887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea\"" Aug 19 08:16:00.186703 kubelet[2379]: E0819 08:16:00.186676 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:00.188631 containerd[1602]: time="2025-08-19T08:16:00.188596047Z" level=info msg="CreateContainer within sandbox \"8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 08:16:00.191672 containerd[1602]: time="2025-08-19T08:16:00.191621199Z" level=info msg="Container d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:00.197679 containerd[1602]: time="2025-08-19T08:16:00.197650311Z" level=info msg="CreateContainer within sandbox \"d6ecb27668d59ae28e81ba07be747cfa3209a9e263b4b40d727a5a96f3f55e88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7\"" Aug 19 08:16:00.198309 containerd[1602]: time="2025-08-19T08:16:00.198283658Z" level=info msg="StartContainer for \"2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7\"" Aug 19 08:16:00.199448 containerd[1602]: time="2025-08-19T08:16:00.199412995Z" level=info msg="connecting to shim 2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7" address="unix:///run/containerd/s/74a85b215526e346d09cb80c23364ae151002f794bb6f0bb28494c5e5cbef059" protocol=ttrpc version=3 Aug 19 08:16:00.204572 containerd[1602]: time="2025-08-19T08:16:00.204381079Z" level=info msg="Container dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:00.207453 containerd[1602]: time="2025-08-19T08:16:00.207409616Z" level=info msg="CreateContainer within sandbox \"7dcc49a42d975b9be559f62371760818168107ea8a5774cbbeebe57190ec77d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95\"" Aug 19 08:16:00.207902 containerd[1602]: time="2025-08-19T08:16:00.207869118Z" level=info msg="StartContainer for \"d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95\"" Aug 19 08:16:00.209033 containerd[1602]: time="2025-08-19T08:16:00.209006370Z" level=info msg="connecting to shim d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95" address="unix:///run/containerd/s/c0c7c0987a30573a004f3db3ae69cfa8bb779fbde8b66a8fd63674ccb654c849" protocol=ttrpc version=3 Aug 19 08:16:00.212746 containerd[1602]: time="2025-08-19T08:16:00.212689745Z" level=info msg="CreateContainer within sandbox \"8781145de3b79cac9b945f1347201d3496769d9d70d4629e3c56156e7d5051ea\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0\"" Aug 19 08:16:00.213253 containerd[1602]: time="2025-08-19T08:16:00.213176908Z" level=info msg="StartContainer for \"dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0\"" Aug 19 08:16:00.214361 containerd[1602]: time="2025-08-19T08:16:00.214317656Z" level=info msg="connecting to shim dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0" address="unix:///run/containerd/s/9b9420f2f10d544aea65285ab2969409832f0c60cd6715d61c687e3ab9a0fbfe" protocol=ttrpc version=3 Aug 19 08:16:00.223901 systemd[1]: Started cri-containerd-2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7.scope - libcontainer container 2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7. Aug 19 08:16:00.237903 systemd[1]: Started cri-containerd-d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95.scope - libcontainer container d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95. Aug 19 08:16:00.241837 systemd[1]: Started cri-containerd-dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0.scope - libcontainer container dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0. Aug 19 08:16:00.300287 containerd[1602]: time="2025-08-19T08:16:00.300142527Z" level=info msg="StartContainer for \"2fab5879c3b0f1ea9d1c248d17ab8aef016ed0f3c05a715e0e38b15e45129dd7\" returns successfully" Aug 19 08:16:00.315328 containerd[1602]: time="2025-08-19T08:16:00.315269916Z" level=info msg="StartContainer for \"d6e4c04606de51e1e94428a82948da8955f8af5485349c27352b75dd214a6b95\" returns successfully" Aug 19 08:16:00.316654 containerd[1602]: time="2025-08-19T08:16:00.316624114Z" level=info msg="StartContainer for \"dff16c356d9b4e1302499bcae354037bbe80a6b6df69379373b7c96883261fe0\" returns successfully" Aug 19 08:16:00.975067 kubelet[2379]: I0819 08:16:00.936088 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:16:01.274847 kubelet[2379]: E0819 08:16:01.274629 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:01.275128 kubelet[2379]: E0819 08:16:01.274992 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:01.281296 kubelet[2379]: E0819 08:16:01.281197 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:01.544677 kubelet[2379]: E0819 08:16:01.544277 2379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 08:16:01.702275 kubelet[2379]: I0819 08:16:01.702211 2379 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 08:16:01.702275 kubelet[2379]: E0819 08:16:01.702250 2379 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 19 08:16:01.714400 kubelet[2379]: E0819 08:16:01.714358 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:01.814982 kubelet[2379]: E0819 08:16:01.814813 2379 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:01.915398 kubelet[2379]: E0819 08:16:01.915352 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.016094 kubelet[2379]: E0819 08:16:02.016029 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.116863 kubelet[2379]: E0819 08:16:02.116693 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.217217 kubelet[2379]: E0819 08:16:02.217127 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.282535 kubelet[2379]: E0819 08:16:02.282465 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:02.282718 kubelet[2379]: E0819 08:16:02.282662 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:02.282718 kubelet[2379]: E0819 08:16:02.282688 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:02.317604 kubelet[2379]: E0819 08:16:02.317527 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.418209 kubelet[2379]: E0819 08:16:02.418064 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.519105 kubelet[2379]: E0819 08:16:02.518649 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.619226 kubelet[2379]: E0819 08:16:02.619170 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.720602 kubelet[2379]: E0819 08:16:02.719818 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:02.820633 kubelet[2379]: E0819 08:16:02.820568 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 08:16:03.205756 kubelet[2379]: I0819 08:16:03.205683 2379 apiserver.go:52] "Watching apiserver" Aug 19 08:16:03.218545 kubelet[2379]: I0819 08:16:03.218490 2379 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:16:03.291189 kubelet[2379]: E0819 08:16:03.291139 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:04.083719 systemd[1]: Reload requested from client PID 2656 ('systemctl') (unit session-7.scope)... Aug 19 08:16:04.083765 systemd[1]: Reloading... Aug 19 08:16:04.170864 zram_generator::config[2702]: No configuration found. 
Aug 19 08:16:04.284450 kubelet[2379]: E0819 08:16:04.284387 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:04.395817 systemd[1]: Reloading finished in 311 ms. Aug 19 08:16:04.427028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:16:04.447219 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 08:16:04.447558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:16:04.447632 systemd[1]: kubelet.service: Consumed 1.104s CPU time, 135.1M memory peak. Aug 19 08:16:04.449657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:16:04.654698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:16:04.659112 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 08:16:04.720213 kubelet[2744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:16:04.720685 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 19 08:16:04.720685 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 08:16:04.720876 kubelet[2744]: I0819 08:16:04.720756 2744 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:16:04.726707 kubelet[2744]: I0819 08:16:04.726670 2744 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 19 08:16:04.726707 kubelet[2744]: I0819 08:16:04.726698 2744 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:16:04.726952 kubelet[2744]: I0819 08:16:04.726922 2744 server.go:934] "Client rotation is on, will bootstrap in background" Aug 19 08:16:04.728231 kubelet[2744]: I0819 08:16:04.728203 2744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 19 08:16:04.729945 kubelet[2744]: I0819 08:16:04.729921 2744 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:16:04.737847 kubelet[2744]: I0819 08:16:04.737823 2744 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:16:04.742535 kubelet[2744]: I0819 08:16:04.742509 2744 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:16:04.742658 kubelet[2744]: I0819 08:16:04.742638 2744 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 19 08:16:04.742827 kubelet[2744]: I0819 08:16:04.742780 2744 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:16:04.742998 kubelet[2744]: I0819 08:16:04.742811 2744 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:16:04.743149 kubelet[2744]: I0819 08:16:04.743010 2744 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 08:16:04.743149 kubelet[2744]: I0819 08:16:04.743019 2744 container_manager_linux.go:300] "Creating device plugin manager" Aug 19 08:16:04.743149 kubelet[2744]: I0819 08:16:04.743052 2744 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:16:04.743216 kubelet[2744]: I0819 08:16:04.743171 2744 kubelet.go:408] "Attempting to sync node with API server" Aug 19 08:16:04.743216 kubelet[2744]: I0819 08:16:04.743185 2744 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:16:04.743268 kubelet[2744]: I0819 08:16:04.743225 2744 kubelet.go:314] "Adding apiserver pod source" Aug 19 08:16:04.743268 kubelet[2744]: I0819 08:16:04.743238 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:16:04.743763 kubelet[2744]: I0819 08:16:04.743705 2744 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:16:04.744208 kubelet[2744]: I0819 08:16:04.744170 2744 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 08:16:04.744621 kubelet[2744]: I0819 08:16:04.744573 2744 server.go:1274] "Started kubelet" Aug 19 08:16:04.745329 kubelet[2744]: I0819 08:16:04.745274 2744 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:16:04.746300 kubelet[2744]: I0819 
08:16:04.746256 2744 server.go:449] "Adding debug handlers to kubelet server" Aug 19 08:16:04.746396 kubelet[2744]: I0819 08:16:04.746353 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:16:04.747428 kubelet[2744]: I0819 08:16:04.747340 2744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:16:04.747713 kubelet[2744]: I0819 08:16:04.747671 2744 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:16:04.756784 kubelet[2744]: I0819 08:16:04.756713 2744 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 19 08:16:04.757211 kubelet[2744]: I0819 08:16:04.757168 2744 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 19 08:16:04.757334 kubelet[2744]: I0819 08:16:04.757310 2744 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:16:04.761574 kubelet[2744]: I0819 08:16:04.761515 2744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:16:04.761837 kubelet[2744]: I0819 08:16:04.761807 2744 factory.go:221] Registration of the systemd container factory successfully Aug 19 08:16:04.762548 kubelet[2744]: I0819 08:16:04.762483 2744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:16:04.763695 kubelet[2744]: I0819 08:16:04.763674 2744 factory.go:221] Registration of the containerd container factory successfully Aug 19 08:16:04.766325 kubelet[2744]: I0819 08:16:04.766285 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 08:16:04.767924 kubelet[2744]: I0819 08:16:04.767882 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 08:16:04.767924 kubelet[2744]: I0819 08:16:04.767906 2744 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 19 08:16:04.767924 kubelet[2744]: I0819 08:16:04.767927 2744 kubelet.go:2321] "Starting kubelet main sync loop" Aug 19 08:16:04.768180 kubelet[2744]: E0819 08:16:04.767972 2744 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:16:04.768180 kubelet[2744]: E0819 08:16:04.763706 2744 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:16:04.800229 kubelet[2744]: I0819 08:16:04.800189 2744 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 19 08:16:04.800229 kubelet[2744]: I0819 08:16:04.800210 2744 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 19 08:16:04.800229 kubelet[2744]: I0819 08:16:04.800229 2744 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:16:04.800437 kubelet[2744]: I0819 08:16:04.800409 2744 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 19 08:16:04.800437 kubelet[2744]: I0819 08:16:04.800420 2744 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 19 08:16:04.800437 kubelet[2744]: I0819 08:16:04.800439 2744 policy_none.go:49] "None policy: Start" Aug 19 08:16:04.801093 kubelet[2744]: I0819 08:16:04.801066 2744 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 19 08:16:04.801142 kubelet[2744]: I0819 08:16:04.801102 2744 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:16:04.801288 kubelet[2744]: I0819 08:16:04.801263 2744 state_mem.go:75] "Updated machine memory state" Aug 19 08:16:04.806071 kubelet[2744]: I0819 08:16:04.806047 2744 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 08:16:04.806258 kubelet[2744]: I0819 08:16:04.806234 2744 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:16:04.806294 kubelet[2744]: I0819 08:16:04.806253 2744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:16:04.806565 kubelet[2744]: I0819 08:16:04.806451 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:16:04.875048 kubelet[2744]: E0819 08:16:04.874990 2744 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 19 08:16:04.913288 kubelet[2744]: I0819 08:16:04.913199 2744 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 19 08:16:04.918451 kubelet[2744]: I0819 08:16:04.918412 2744 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 19 08:16:04.918533 kubelet[2744]: I0819 08:16:04.918517 2744 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 19 08:16:05.035553 sudo[2780]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 19 08:16:05.035951 sudo[2780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 19 08:16:05.059194 kubelet[2744]: I0819 08:16:05.059134 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:16:05.059194 kubelet[2744]: I0819 08:16:05.059177 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:16:05.059522 kubelet[2744]: I0819 08:16:05.059221 2744 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:16:05.059522 kubelet[2744]: I0819 08:16:05.059281 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:16:05.059522 kubelet[2744]: I0819 08:16:05.059351 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:16:05.059522 kubelet[2744]: I0819 08:16:05.059403 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 08:16:05.059522 kubelet[2744]: I0819 08:16:05.059432 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Aug 19 08:16:05.059779 kubelet[2744]: I0819 08:16:05.059457 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:16:05.059779 kubelet[2744]: I0819 08:16:05.059477 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b01ff527f92d8be52a38400c5a3712e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b01ff527f92d8be52a38400c5a3712e3\") " pod="kube-system/kube-apiserver-localhost" Aug 19 08:16:05.174933 kubelet[2744]: E0819 08:16:05.174322 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.174933 kubelet[2744]: E0819 08:16:05.174337 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.175570 kubelet[2744]: E0819 08:16:05.175526 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.372290 sudo[2780]: pam_unix(sudo:session): session closed for user root Aug 19 08:16:05.744296 kubelet[2744]: I0819 
08:16:05.744244 2744 apiserver.go:52] "Watching apiserver" Aug 19 08:16:05.757836 kubelet[2744]: I0819 08:16:05.757803 2744 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 19 08:16:05.783501 kubelet[2744]: E0819 08:16:05.783453 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.783501 kubelet[2744]: E0819 08:16:05.783498 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.784333 kubelet[2744]: E0819 08:16:05.783699 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:05.811763 kubelet[2744]: I0819 08:16:05.811382 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.811352315 podStartE2EDuration="2.811352315s" podCreationTimestamp="2025-08-19 08:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:05.805345109 +0000 UTC m=+1.129036669" watchObservedRunningTime="2025-08-19 08:16:05.811352315 +0000 UTC m=+1.135043875" Aug 19 08:16:05.817425 kubelet[2744]: I0819 08:16:05.817239 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.817197832 podStartE2EDuration="1.817197832s" podCreationTimestamp="2025-08-19 08:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:05.811874013 +0000 UTC m=+1.135565573" watchObservedRunningTime="2025-08-19 08:16:05.817197832 +0000 UTC m=+1.140889392" Aug 19 08:16:05.817648 kubelet[2744]: I0819 08:16:05.817461 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8174538120000001 podStartE2EDuration="1.817453812s" podCreationTimestamp="2025-08-19 08:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:05.817198063 +0000 UTC m=+1.140889623" watchObservedRunningTime="2025-08-19 08:16:05.817453812 +0000 UTC m=+1.141145372" Aug 19 08:16:06.729415 sudo[1810]: pam_unix(sudo:session): session closed for user root Aug 19 08:16:06.731480 sshd[1809]: Connection closed by 10.0.0.1 port 57126 Aug 19 08:16:06.731915 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:06.737202 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:57126.service: Deactivated successfully. Aug 19 08:16:06.739716 systemd[1]: session-7.scope: Deactivated successfully. Aug 19 08:16:06.740015 systemd[1]: session-7.scope: Consumed 4.660s CPU time, 262.7M memory peak. Aug 19 08:16:06.741536 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Aug 19 08:16:06.743320 systemd-logind[1582]: Removed session 7. 
Aug 19 08:16:06.784798 kubelet[2744]: E0819 08:16:06.784723 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:09.737663 kubelet[2744]: I0819 08:16:09.737592 2744 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:16:09.739007 containerd[1602]: time="2025-08-19T08:16:09.738957577Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:16:09.740264 kubelet[2744]: I0819 08:16:09.740243 2744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:16:09.816632 kubelet[2744]: E0819 08:16:09.816577 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:10.309706 kubelet[2744]: E0819 08:16:10.309671 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:10.753827 systemd[1]: Created slice kubepods-besteffort-pod5c36304f_543c_44f3_a160_1326cf4ec73a.slice - libcontainer container kubepods-besteffort-pod5c36304f_543c_44f3_a160_1326cf4ec73a.slice. Aug 19 08:16:10.773679 systemd[1]: Created slice kubepods-burstable-podae0e5a86_a329_48d5_995a_09c169f434f6.slice - libcontainer container kubepods-burstable-podae0e5a86_a329_48d5_995a_09c169f434f6.slice. Aug 19 08:16:10.790904 kubelet[2744]: E0819 08:16:10.790863 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:10.794830 kubelet[2744]: I0819 08:16:10.794784 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-etc-cni-netd\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.794830 kubelet[2744]: I0819 08:16:10.794813 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-xtables-lock\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.794996 kubelet[2744]: I0819 08:16:10.794849 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c36304f-543c-44f3-a160-1326cf4ec73a-kube-proxy\") pod \"kube-proxy-k82w5\" (UID: \"5c36304f-543c-44f3-a160-1326cf4ec73a\") " pod="kube-system/kube-proxy-k82w5" Aug 19 08:16:10.794996 kubelet[2744]: I0819 08:16:10.794878 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c36304f-543c-44f3-a160-1326cf4ec73a-xtables-lock\") pod \"kube-proxy-k82w5\" (UID: \"5c36304f-543c-44f3-a160-1326cf4ec73a\") " pod="kube-system/kube-proxy-k82w5" Aug 19 08:16:10.794996 kubelet[2744]: I0819 08:16:10.794932 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-bpf-maps\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.794996 kubelet[2744]: I0819 08:16:10.794973 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkfc\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-kube-api-access-6tkfc\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.794996 kubelet[2744]: I0819 08:16:10.794994 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-cgroup\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795122 kubelet[2744]: I0819 08:16:10.795010 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-lib-modules\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795122 kubelet[2744]: I0819 08:16:10.795041 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-kernel\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795122 kubelet[2744]: I0819 08:16:10.795056 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-hostproc\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795122 kubelet[2744]: I0819 08:16:10.795075 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae0e5a86-a329-48d5-995a-09c169f434f6-clustermesh-secrets\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795122 kubelet[2744]: I0819 08:16:10.795097 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-net\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795239 kubelet[2744]: I0819 08:16:10.795131 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-hubble-tls\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795239 kubelet[2744]: I0819 08:16:10.795148 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c36304f-543c-44f3-a160-1326cf4ec73a-lib-modules\") pod \"kube-proxy-k82w5\" (UID: \"5c36304f-543c-44f3-a160-1326cf4ec73a\") " 
pod="kube-system/kube-proxy-k82w5" Aug 19 08:16:10.795239 kubelet[2744]: I0819 08:16:10.795179 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmsjc\" (UniqueName: \"kubernetes.io/projected/5c36304f-543c-44f3-a160-1326cf4ec73a-kube-api-access-hmsjc\") pod \"kube-proxy-k82w5\" (UID: \"5c36304f-543c-44f3-a160-1326cf4ec73a\") " pod="kube-system/kube-proxy-k82w5" Aug 19 08:16:10.795239 kubelet[2744]: I0819 08:16:10.795195 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-config-path\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795330 kubelet[2744]: I0819 08:16:10.795255 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cni-path\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.795330 kubelet[2744]: I0819 08:16:10.795281 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-run\") pod \"cilium-hh22h\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " pod="kube-system/cilium-hh22h" Aug 19 08:16:10.957540 systemd[1]: Created slice kubepods-besteffort-pod29eba499_2eb0_45cc_adc2_ce2cef4738e8.slice - libcontainer container kubepods-besteffort-pod29eba499_2eb0_45cc_adc2_ce2cef4738e8.slice. Aug 19 08:16:10.997495 kubelet[2744]: I0819 08:16:10.997416 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29eba499-2eb0-45cc-adc2-ce2cef4738e8-cilium-config-path\") pod \"cilium-operator-5d85765b45-m6j8c\" (UID: \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\") " pod="kube-system/cilium-operator-5d85765b45-m6j8c" Aug 19 08:16:10.997495 kubelet[2744]: I0819 08:16:10.997463 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmp2\" (UniqueName: \"kubernetes.io/projected/29eba499-2eb0-45cc-adc2-ce2cef4738e8-kube-api-access-8vmp2\") pod \"cilium-operator-5d85765b45-m6j8c\" (UID: \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\") " pod="kube-system/cilium-operator-5d85765b45-m6j8c" Aug 19 08:16:11.069864 kubelet[2744]: E0819 08:16:11.069685 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.070583 containerd[1602]: time="2025-08-19T08:16:11.070532234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k82w5,Uid:5c36304f-543c-44f3-a160-1326cf4ec73a,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:11.076614 kubelet[2744]: E0819 08:16:11.076590 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.082123 containerd[1602]: time="2025-08-19T08:16:11.082060462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hh22h,Uid:ae0e5a86-a329-48d5-995a-09c169f434f6,Namespace:kube-system,Attempt:0,}" Aug 19 
08:16:11.263997 kubelet[2744]: E0819 08:16:11.263939 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.264550 containerd[1602]: time="2025-08-19T08:16:11.264435203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m6j8c,Uid:29eba499-2eb0-45cc-adc2-ce2cef4738e8,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:11.283023 containerd[1602]: time="2025-08-19T08:16:11.282875100Z" level=info msg="connecting to shim 216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f" address="unix:///run/containerd/s/165a490f35049dfd11f173bd9fb799f066fbd77d1ec62b1fef01ffc64d640702" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:11.284965 containerd[1602]: time="2025-08-19T08:16:11.284922113Z" level=info msg="connecting to shim 415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:11.302974 containerd[1602]: time="2025-08-19T08:16:11.302914430Z" level=info msg="connecting to shim 848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db" address="unix:///run/containerd/s/af8c43f73c785e4e72ccd9aa233812fc7836f666b6ff7a81c703cf4676ce6210" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:11.313922 systemd[1]: Started cri-containerd-216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f.scope - libcontainer container 216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f. Aug 19 08:16:11.315871 systemd[1]: Started cri-containerd-415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531.scope - libcontainer container 415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531. Aug 19 08:16:11.324263 systemd[1]: Started cri-containerd-848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db.scope - libcontainer container 848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db. 
Aug 19 08:16:11.353509 containerd[1602]: time="2025-08-19T08:16:11.353458625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k82w5,Uid:5c36304f-543c-44f3-a160-1326cf4ec73a,Namespace:kube-system,Attempt:0,} returns sandbox id \"216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f\"" Aug 19 08:16:11.354247 kubelet[2744]: E0819 08:16:11.354223 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.358465 containerd[1602]: time="2025-08-19T08:16:11.358427541Z" level=info msg="CreateContainer within sandbox \"216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:16:11.358577 containerd[1602]: time="2025-08-19T08:16:11.358428393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hh22h,Uid:ae0e5a86-a329-48d5-995a-09c169f434f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\"" Aug 19 08:16:11.359434 kubelet[2744]: E0819 08:16:11.359405 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.360638 containerd[1602]: time="2025-08-19T08:16:11.360601515Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 19 08:16:11.371311 containerd[1602]: time="2025-08-19T08:16:11.371254568Z" level=info msg="Container 726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:11.380329 containerd[1602]: time="2025-08-19T08:16:11.380207589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m6j8c,Uid:29eba499-2eb0-45cc-adc2-ce2cef4738e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\"" Aug 19 08:16:11.381160 kubelet[2744]: E0819 08:16:11.381122 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.381559 containerd[1602]: time="2025-08-19T08:16:11.381516939Z" level=info msg="CreateContainer within sandbox \"216c8b3b6c909664033d9679a47ed3133a76cb971b3e134ec987489de622659f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4\"" Aug 19 08:16:11.382239 containerd[1602]: time="2025-08-19T08:16:11.382200218Z" level=info msg="StartContainer for \"726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4\"" Aug 19 08:16:11.384042 containerd[1602]: time="2025-08-19T08:16:11.383975834Z" level=info msg="connecting to shim 726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4" address="unix:///run/containerd/s/165a490f35049dfd11f173bd9fb799f066fbd77d1ec62b1fef01ffc64d640702" protocol=ttrpc version=3 Aug 19 08:16:11.420903 systemd[1]: Started cri-containerd-726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4.scope - libcontainer container 726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4. 
Aug 19 08:16:11.470412 containerd[1602]: time="2025-08-19T08:16:11.470359657Z" level=info msg="StartContainer for \"726e06462e335e0f7f725ce4c86c988c261ebefb91656b1cc0efc0d0aef495a4\" returns successfully" Aug 19 08:16:11.797920 kubelet[2744]: E0819 08:16:11.797098 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:11.805570 kubelet[2744]: I0819 08:16:11.805499 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k82w5" podStartSLOduration=1.8054771939999998 podStartE2EDuration="1.805477194s" podCreationTimestamp="2025-08-19 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:11.805248359 +0000 UTC m=+7.128939929" watchObservedRunningTime="2025-08-19 08:16:11.805477194 +0000 UTC m=+7.129168744" Aug 19 08:16:14.049517 update_engine[1585]: I20250819 08:16:14.049400 1585 update_attempter.cc:509] Updating boot flags... Aug 19 08:16:14.169615 kubelet[2744]: E0819 08:16:14.169558 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:14.802184 kubelet[2744]: E0819 08:16:14.802074 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:15.803708 kubelet[2744]: E0819 08:16:15.803656 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:19.821883 kubelet[2744]: E0819 08:16:19.821843 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:20.352698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187280087.mount: Deactivated successfully. 
Aug 19 08:16:25.140524 containerd[1602]: time="2025-08-19T08:16:25.140444573Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:25.141297 containerd[1602]: time="2025-08-19T08:16:25.141230705Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 19 08:16:25.142458 containerd[1602]: time="2025-08-19T08:16:25.142416100Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:25.144028 containerd[1602]: time="2025-08-19T08:16:25.143983295Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.783335312s" Aug 19 08:16:25.144028 containerd[1602]: time="2025-08-19T08:16:25.144025103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 19 08:16:25.145181 containerd[1602]: time="2025-08-19T08:16:25.145138242Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 08:16:25.146492 containerd[1602]: time="2025-08-19T08:16:25.146401474Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:16:25.155460 containerd[1602]: time="2025-08-19T08:16:25.155422713Z" level=info msg="Container 6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:25.162354 containerd[1602]: time="2025-08-19T08:16:25.162317994Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\"" Aug 19 08:16:25.162851 containerd[1602]: time="2025-08-19T08:16:25.162821733Z" level=info msg="StartContainer for \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\"" Aug 19 08:16:25.163697 containerd[1602]: time="2025-08-19T08:16:25.163667427Z" level=info msg="connecting to shim 6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" protocol=ttrpc version=3 Aug 19 08:16:25.223902 systemd[1]: Started cri-containerd-6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef.scope - libcontainer container 6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef. 
Aug 19 08:16:25.260689 containerd[1602]: time="2025-08-19T08:16:25.260622517Z" level=info msg="StartContainer for \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" returns successfully" Aug 19 08:16:25.273259 systemd[1]: cri-containerd-6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef.scope: Deactivated successfully. Aug 19 08:16:25.274294 systemd[1]: cri-containerd-6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef.scope: Consumed 30ms CPU time, 6.7M memory peak, 16K read from disk, 3.2M written to disk. Aug 19 08:16:25.275201 containerd[1602]: time="2025-08-19T08:16:25.275125271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" id:\"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" pid:3187 exited_at:{seconds:1755591385 nanos:274621320}" Aug 19 08:16:25.275404 containerd[1602]: time="2025-08-19T08:16:25.275245808Z" level=info msg="received exit event container_id:\"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" id:\"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" pid:3187 exited_at:{seconds:1755591385 nanos:274621320}" Aug 19 08:16:25.300625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef-rootfs.mount: Deactivated successfully. Aug 19 08:16:25.824654 kubelet[2744]: E0819 08:16:25.824570 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:25.834160 containerd[1602]: time="2025-08-19T08:16:25.828063643Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:16:25.848871 containerd[1602]: time="2025-08-19T08:16:25.848792184Z" level=info msg="Container 35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:25.859094 containerd[1602]: time="2025-08-19T08:16:25.859032131Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\"" Aug 19 08:16:25.865498 containerd[1602]: time="2025-08-19T08:16:25.864993250Z" level=info msg="StartContainer for \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\"" Aug 19 08:16:25.866238 containerd[1602]: time="2025-08-19T08:16:25.866206216Z" level=info msg="connecting to shim 35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" protocol=ttrpc version=3 Aug 19 08:16:25.915929 systemd[1]: Started cri-containerd-35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8.scope - libcontainer container 35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8. Aug 19 08:16:25.961455 containerd[1602]: time="2025-08-19T08:16:25.961387241Z" level=info msg="StartContainer for \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" returns successfully" Aug 19 08:16:25.980895 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 19 08:16:25.981158 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:16:25.981587 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:16:25.983661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:16:25.985789 systemd[1]: cri-containerd-35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8.scope: Deactivated successfully. Aug 19 08:16:25.987058 containerd[1602]: time="2025-08-19T08:16:25.987020500Z" level=info msg="received exit event container_id:\"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" id:\"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" pid:3234 exited_at:{seconds:1755591385 nanos:986698493}" Aug 19 08:16:25.987161 containerd[1602]: time="2025-08-19T08:16:25.987131369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" id:\"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" pid:3234 exited_at:{seconds:1755591385 nanos:986698493}" Aug 19 08:16:26.011445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:16:26.829959 kubelet[2744]: E0819 08:16:26.829899 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:26.832939 containerd[1602]: time="2025-08-19T08:16:26.832854619Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:16:26.856361 containerd[1602]: time="2025-08-19T08:16:26.856283156Z" level=info msg="Container 1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:26.859959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3888128494.mount: Deactivated successfully. Aug 19 08:16:26.867285 containerd[1602]: time="2025-08-19T08:16:26.867228444Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\"" Aug 19 08:16:26.868064 containerd[1602]: time="2025-08-19T08:16:26.867695064Z" level=info msg="StartContainer for \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\"" Aug 19 08:16:26.869239 containerd[1602]: time="2025-08-19T08:16:26.869210601Z" level=info msg="connecting to shim 1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" protocol=ttrpc version=3 Aug 19 08:16:26.900871 systemd[1]: Started cri-containerd-1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53.scope - libcontainer container 1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53. Aug 19 08:16:26.945402 systemd[1]: cri-containerd-1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53.scope: Deactivated successfully. 
Aug 19 08:16:26.947116 containerd[1602]: time="2025-08-19T08:16:26.946898452Z" level=info msg="received exit event container_id:\"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" id:\"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" pid:3283 exited_at:{seconds:1755591386 nanos:946408438}" Aug 19 08:16:26.947599 containerd[1602]: time="2025-08-19T08:16:26.947107756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" id:\"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" pid:3283 exited_at:{seconds:1755591386 nanos:946408438}" Aug 19 08:16:26.948728 containerd[1602]: time="2025-08-19T08:16:26.948678797Z" level=info msg="StartContainer for \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" returns successfully" Aug 19 08:16:26.971870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53-rootfs.mount: Deactivated successfully. Aug 19 08:16:27.360925 containerd[1602]: time="2025-08-19T08:16:27.360852337Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:27.361524 containerd[1602]: time="2025-08-19T08:16:27.361495819Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 19 08:16:27.362549 containerd[1602]: time="2025-08-19T08:16:27.362519006Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:16:27.363678 containerd[1602]: time="2025-08-19T08:16:27.363623237Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.218440922s" Aug 19 08:16:27.363678 containerd[1602]: time="2025-08-19T08:16:27.363663503Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 19 08:16:27.365690 containerd[1602]: time="2025-08-19T08:16:27.365660065Z" level=info msg="CreateContainer within sandbox \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 08:16:27.372644 containerd[1602]: time="2025-08-19T08:16:27.372603327Z" level=info msg="Container a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:27.378978 containerd[1602]: time="2025-08-19T08:16:27.378939996Z" level=info msg="CreateContainer within sandbox \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\"" Aug 19 08:16:27.379325 containerd[1602]: 
time="2025-08-19T08:16:27.379303490Z" level=info msg="StartContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\"" Aug 19 08:16:27.380102 containerd[1602]: time="2025-08-19T08:16:27.380065787Z" level=info msg="connecting to shim a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2" address="unix:///run/containerd/s/af8c43f73c785e4e72ccd9aa233812fc7836f666b6ff7a81c703cf4676ce6210" protocol=ttrpc version=3 Aug 19 08:16:27.407857 systemd[1]: Started cri-containerd-a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2.scope - libcontainer container a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2. Aug 19 08:16:27.442289 containerd[1602]: time="2025-08-19T08:16:27.442230702Z" level=info msg="StartContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" returns successfully" Aug 19 08:16:27.835947 kubelet[2744]: E0819 08:16:27.835902 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:27.844881 kubelet[2744]: E0819 08:16:27.844838 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:27.855818 containerd[1602]: time="2025-08-19T08:16:27.854932383Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:16:27.872305 containerd[1602]: time="2025-08-19T08:16:27.872254249Z" level=info msg="Container 962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:27.876524 kubelet[2744]: I0819 08:16:27.876438 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m6j8c" podStartSLOduration=1.894764559 podStartE2EDuration="17.876408737s" podCreationTimestamp="2025-08-19 08:16:10 +0000 UTC" firstStartedPulling="2025-08-19 08:16:11.382677014 +0000 UTC m=+6.706368574" lastFinishedPulling="2025-08-19 08:16:27.364321202 +0000 UTC m=+22.688012752" observedRunningTime="2025-08-19 08:16:27.857904863 +0000 UTC m=+23.181596433" watchObservedRunningTime="2025-08-19 08:16:27.876408737 +0000 UTC m=+23.200100297" Aug 19 08:16:27.882908 containerd[1602]: time="2025-08-19T08:16:27.882864701Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\"" Aug 19 08:16:27.883356 containerd[1602]: time="2025-08-19T08:16:27.883314448Z" level=info msg="StartContainer for \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\"" Aug 19 08:16:27.884361 containerd[1602]: time="2025-08-19T08:16:27.884331003Z" level=info msg="connecting to shim 962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" protocol=ttrpc version=3 Aug 19 08:16:27.914044 systemd[1]: Started cri-containerd-962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054.scope - libcontainer container 962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054. 
Aug 19 08:16:27.945397 systemd[1]: cri-containerd-962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054.scope: Deactivated successfully. Aug 19 08:16:27.946454 containerd[1602]: time="2025-08-19T08:16:27.946348159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" id:\"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" pid:3372 exited_at:{seconds:1755591387 nanos:945829963}" Aug 19 08:16:28.055815 containerd[1602]: time="2025-08-19T08:16:28.055762385Z" level=info msg="received exit event container_id:\"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" id:\"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" pid:3372 exited_at:{seconds:1755591387 nanos:945829963}" Aug 19 08:16:28.063052 containerd[1602]: time="2025-08-19T08:16:28.063002752Z" level=info msg="StartContainer for \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" returns successfully" Aug 19 08:16:28.850728 kubelet[2744]: E0819 08:16:28.850114 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:28.851489 kubelet[2744]: E0819 08:16:28.850858 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:29.854637 kubelet[2744]: E0819 08:16:29.854603 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:29.856893 containerd[1602]: time="2025-08-19T08:16:29.856724862Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:16:29.870633 containerd[1602]: time="2025-08-19T08:16:29.870571484Z" level=info msg="Container a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:29.877982 containerd[1602]: time="2025-08-19T08:16:29.877931292Z" level=info msg="CreateContainer within sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\"" Aug 19 08:16:29.878515 containerd[1602]: time="2025-08-19T08:16:29.878485086Z" level=info msg="StartContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\"" Aug 19 08:16:29.879435 containerd[1602]: time="2025-08-19T08:16:29.879402633Z" level=info msg="connecting to shim a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a" address="unix:///run/containerd/s/8db733eeffff8b444bbf734564729d74580a5bc3dd4c4d719eb9480898fabe05" protocol=ttrpc version=3 Aug 19 08:16:29.902883 systemd[1]: Started cri-containerd-a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a.scope - libcontainer container a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a. 
Aug 19 08:16:29.943753 containerd[1602]: time="2025-08-19T08:16:29.943389567Z" level=info msg="StartContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" returns successfully" Aug 19 08:16:30.019000 containerd[1602]: time="2025-08-19T08:16:30.018530646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" id:\"9be0df4f0001e1ec26692dbb78b1e94e54b415e56a9b32c2ad485188e135078d\" pid:3436 exited_at:{seconds:1755591390 nanos:18187950}" Aug 19 08:16:30.048803 kubelet[2744]: I0819 08:16:30.048715 2744 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 19 08:16:30.084517 systemd[1]: Created slice kubepods-burstable-pod1b035a63_4139_4758_946b_0a3d95f25c9e.slice - libcontainer container kubepods-burstable-pod1b035a63_4139_4758_946b_0a3d95f25c9e.slice. Aug 19 08:16:30.091806 systemd[1]: Created slice kubepods-burstable-pod78545998_621c_4fc9_bd37_0312d8c026e8.slice - libcontainer container kubepods-burstable-pod78545998_621c_4fc9_bd37_0312d8c026e8.slice. Aug 19 08:16:30.224297 kubelet[2744]: I0819 08:16:30.224255 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b035a63-4139-4758-946b-0a3d95f25c9e-config-volume\") pod \"coredns-7c65d6cfc9-s6j7l\" (UID: \"1b035a63-4139-4758-946b-0a3d95f25c9e\") " pod="kube-system/coredns-7c65d6cfc9-s6j7l" Aug 19 08:16:30.224297 kubelet[2744]: I0819 08:16:30.224299 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78545998-621c-4fc9-bd37-0312d8c026e8-config-volume\") pod \"coredns-7c65d6cfc9-9zgdh\" (UID: \"78545998-621c-4fc9-bd37-0312d8c026e8\") " pod="kube-system/coredns-7c65d6cfc9-9zgdh" Aug 19 08:16:30.224478 kubelet[2744]: I0819 08:16:30.224323 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq62w\" (UniqueName: \"kubernetes.io/projected/1b035a63-4139-4758-946b-0a3d95f25c9e-kube-api-access-hq62w\") pod \"coredns-7c65d6cfc9-s6j7l\" (UID: \"1b035a63-4139-4758-946b-0a3d95f25c9e\") " pod="kube-system/coredns-7c65d6cfc9-s6j7l" Aug 19 08:16:30.224478 kubelet[2744]: I0819 08:16:30.224342 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcqd\" (UniqueName: \"kubernetes.io/projected/78545998-621c-4fc9-bd37-0312d8c026e8-kube-api-access-6mcqd\") pod \"coredns-7c65d6cfc9-9zgdh\" (UID: \"78545998-621c-4fc9-bd37-0312d8c026e8\") " pod="kube-system/coredns-7c65d6cfc9-9zgdh" Aug 19 08:16:30.389690 kubelet[2744]: E0819 08:16:30.389648 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:30.390716 containerd[1602]: time="2025-08-19T08:16:30.390669463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s6j7l,Uid:1b035a63-4139-4758-946b-0a3d95f25c9e,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:30.395580 kubelet[2744]: E0819 08:16:30.395548 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:30.396099 containerd[1602]: time="2025-08-19T08:16:30.396070670Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zgdh,Uid:78545998-621c-4fc9-bd37-0312d8c026e8,Namespace:kube-system,Attempt:0,}" Aug 19 08:16:30.868007 kubelet[2744]: E0819 08:16:30.867970 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:31.870050 kubelet[2744]: E0819 08:16:31.870004 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:32.287281 systemd-networkd[1490]: cilium_host: Link UP Aug 19 08:16:32.287525 systemd-networkd[1490]: cilium_net: Link UP Aug 19 08:16:32.288065 systemd-networkd[1490]: cilium_host: Gained carrier Aug 19 08:16:32.288437 systemd-networkd[1490]: cilium_net: Gained carrier Aug 19 08:16:32.396243 systemd-networkd[1490]: cilium_vxlan: Link UP Aug 19 08:16:32.396256 systemd-networkd[1490]: cilium_vxlan: Gained carrier Aug 19 08:16:32.638795 kernel: NET: Registered PF_ALG protocol family Aug 19 08:16:32.872674 kubelet[2744]: E0819 08:16:32.872629 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:32.898073 systemd-networkd[1490]: cilium_host: Gained IPv6LL Aug 19 08:16:33.154933 systemd-networkd[1490]: cilium_net: Gained IPv6LL Aug 19 08:16:33.360013 systemd-networkd[1490]: lxc_health: Link UP Aug 19 08:16:33.360340 systemd-networkd[1490]: lxc_health: Gained carrier Aug 19 08:16:33.455112 systemd-networkd[1490]: lxc11a860fb317a: Link UP Aug 19 08:16:33.502878 kernel: eth0: renamed from tmp0951c Aug 19 08:16:33.505523 systemd-networkd[1490]: lxc11a860fb317a: Gained carrier Aug 19 08:16:33.923575 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL Aug 19 08:16:33.963796 systemd-networkd[1490]: lxccdf7818da274: Link UP Aug 19 08:16:33.974767 kernel: eth0: renamed from tmp04980 Aug 19 08:16:33.975013 systemd-networkd[1490]: lxccdf7818da274: Gained carrier Aug 19 08:16:34.253996 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242). Aug 19 08:16:34.311946 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:34.313978 sshd-session[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:34.319081 systemd-logind[1582]: New session 8 of user core. Aug 19 08:16:34.325875 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 19 08:16:34.457006 sshd[3903]: Connection closed by 10.0.0.1 port 42242 Aug 19 08:16:34.457377 sshd-session[3900]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:34.460968 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:42242.service: Deactivated successfully. Aug 19 08:16:34.462937 systemd[1]: session-8.scope: Deactivated successfully. Aug 19 08:16:34.464352 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Aug 19 08:16:34.465574 systemd-logind[1582]: Removed session 8. 
Aug 19 08:16:34.561941 systemd-networkd[1490]: lxc_health: Gained IPv6LL Aug 19 08:16:34.818021 systemd-networkd[1490]: lxc11a860fb317a: Gained IPv6LL Aug 19 08:16:35.078944 kubelet[2744]: E0819 08:16:35.078123 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:35.098938 kubelet[2744]: I0819 08:16:35.098845 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hh22h" podStartSLOduration=11.313937299 podStartE2EDuration="25.098826413s" podCreationTimestamp="2025-08-19 08:16:10 +0000 UTC" firstStartedPulling="2025-08-19 08:16:11.35987331 +0000 UTC m=+6.683564870" lastFinishedPulling="2025-08-19 08:16:25.144762414 +0000 UTC m=+20.468453984" observedRunningTime="2025-08-19 08:16:30.883263571 +0000 UTC m=+26.206955131" watchObservedRunningTime="2025-08-19 08:16:35.098826413 +0000 UTC m=+30.422517973" Aug 19 08:16:35.457983 systemd-networkd[1490]: lxccdf7818da274: Gained IPv6LL Aug 19 08:16:35.878955 kubelet[2744]: E0819 08:16:35.878910 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:36.881041 kubelet[2744]: E0819 08:16:36.880991 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:36.944436 containerd[1602]: time="2025-08-19T08:16:36.944365452Z" level=info msg="connecting to shim 0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502" address="unix:///run/containerd/s/44cae2e375d6e1cd62b4651c8f2846a9a33d605932c06b95b63031df49487bf3" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:36.946196 containerd[1602]: time="2025-08-19T08:16:36.946153643Z" level=info msg="connecting to shim 0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a" address="unix:///run/containerd/s/22a11ce4c0599537cb640df691c183230a3c9f4cced920b778566d3ae92c63b5" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:16:36.977905 systemd[1]: Started cri-containerd-0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502.scope - libcontainer container 0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502. Aug 19 08:16:36.982053 systemd[1]: Started cri-containerd-0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a.scope - libcontainer container 0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a. 
Aug 19 08:16:36.995924 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:16:36.997502 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 08:16:37.028916 containerd[1602]: time="2025-08-19T08:16:37.028863232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s6j7l,Uid:1b035a63-4139-4758-946b-0a3d95f25c9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a\"" Aug 19 08:16:37.029650 kubelet[2744]: E0819 08:16:37.029541 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:37.031706 containerd[1602]: time="2025-08-19T08:16:37.031656763Z" level=info msg="CreateContainer within sandbox \"0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:16:37.033320 containerd[1602]: time="2025-08-19T08:16:37.033282299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zgdh,Uid:78545998-621c-4fc9-bd37-0312d8c026e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502\"" Aug 19 08:16:37.033885 kubelet[2744]: E0819 08:16:37.033860 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:37.035783 containerd[1602]: time="2025-08-19T08:16:37.035749567Z" level=info msg="CreateContainer within sandbox \"0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:16:37.043634 containerd[1602]: time="2025-08-19T08:16:37.043578181Z" level=info msg="Container e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:37.049880 containerd[1602]: time="2025-08-19T08:16:37.049841995Z" level=info msg="Container 9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:16:37.054464 containerd[1602]: time="2025-08-19T08:16:37.054420179Z" level=info msg="CreateContainer within sandbox \"0498031b7f9021bf845c1c126229a091bf1c6c9ca6ab9ff43a4712f4e07d357a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124\"" Aug 19 08:16:37.054974 containerd[1602]: time="2025-08-19T08:16:37.054927654Z" level=info msg="StartContainer for \"e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124\"" Aug 19 08:16:37.056065 containerd[1602]: time="2025-08-19T08:16:37.056039493Z" level=info msg="connecting to shim e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124" address="unix:///run/containerd/s/22a11ce4c0599537cb640df691c183230a3c9f4cced920b778566d3ae92c63b5" protocol=ttrpc version=3 Aug 19 08:16:37.059049 containerd[1602]: time="2025-08-19T08:16:37.059008404Z" level=info msg="CreateContainer within sandbox \"0951c03279d6c6a71ac9cafdd5c61e4823104daec87a020431d5db5a5fc53502\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e\"" Aug 19 08:16:37.059489 containerd[1602]: 
time="2025-08-19T08:16:37.059459151Z" level=info msg="StartContainer for \"9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e\"" Aug 19 08:16:37.060477 containerd[1602]: time="2025-08-19T08:16:37.060428513Z" level=info msg="connecting to shim 9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e" address="unix:///run/containerd/s/44cae2e375d6e1cd62b4651c8f2846a9a33d605932c06b95b63031df49487bf3" protocol=ttrpc version=3 Aug 19 08:16:37.083986 systemd[1]: Started cri-containerd-e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124.scope - libcontainer container e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124. Aug 19 08:16:37.087541 systemd[1]: Started cri-containerd-9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e.scope - libcontainer container 9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e. Aug 19 08:16:37.117961 containerd[1602]: time="2025-08-19T08:16:37.117814776Z" level=info msg="StartContainer for \"e6cfbe9c18c8b5a76215c4cb9eea4f848b7d2013fddd23084fcf421776079124\" returns successfully" Aug 19 08:16:37.126933 containerd[1602]: time="2025-08-19T08:16:37.126879875Z" level=info msg="StartContainer for \"9cb459d45203f17166e96fbeea7ea3feb3e55b768d09bbfc0b08871114b0736e\" returns successfully" Aug 19 08:16:37.966307 kubelet[2744]: E0819 08:16:37.966043 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:37.968759 kubelet[2744]: E0819 08:16:37.968650 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:38.285886 kubelet[2744]: I0819 08:16:38.285073 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-s6j7l" podStartSLOduration=28.285052773 podStartE2EDuration="28.285052773s" podCreationTimestamp="2025-08-19 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:38.273325864 +0000 UTC m=+33.597017424" watchObservedRunningTime="2025-08-19 08:16:38.285052773 +0000 UTC m=+33.608744333" Aug 19 08:16:38.299283 kubelet[2744]: I0819 08:16:38.299206 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9zgdh" podStartSLOduration=28.299174662 podStartE2EDuration="28.299174662s" podCreationTimestamp="2025-08-19 08:16:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:16:38.29707733 +0000 UTC m=+33.620768900" watchObservedRunningTime="2025-08-19 08:16:38.299174662 +0000 UTC m=+33.622866222" Aug 19 08:16:38.970725 kubelet[2744]: E0819 08:16:38.970677 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:38.971190 kubelet[2744]: E0819 08:16:38.970795 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:39.475394 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:59554.service - OpenSSH per-connection server daemon (10.0.0.1:59554). 
Aug 19 08:16:39.518866 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:39.520372 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:39.525347 systemd-logind[1582]: New session 9 of user core. Aug 19 08:16:39.535880 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 08:16:39.657373 sshd[4101]: Connection closed by 10.0.0.1 port 59554 Aug 19 08:16:39.657769 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:39.663217 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:59554.service: Deactivated successfully. Aug 19 08:16:39.665946 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 08:16:39.667498 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Aug 19 08:16:39.669003 systemd-logind[1582]: Removed session 9. Aug 19 08:16:39.972818 kubelet[2744]: E0819 08:16:39.972788 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:39.973318 kubelet[2744]: E0819 08:16:39.972847 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:16:44.681722 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:59566.service - OpenSSH per-connection server daemon (10.0.0.1:59566). Aug 19 08:16:44.735761 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 59566 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:44.737529 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:44.741782 systemd-logind[1582]: New session 10 of user core. Aug 19 08:16:44.752876 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 08:16:44.872183 sshd[4122]: Connection closed by 10.0.0.1 port 59566 Aug 19 08:16:44.872567 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:44.875906 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:59566.service: Deactivated successfully. Aug 19 08:16:44.877926 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:16:44.879303 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:16:44.880412 systemd-logind[1582]: Removed session 10. Aug 19 08:16:49.891670 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:44946.service - OpenSSH per-connection server daemon (10.0.0.1:44946). Aug 19 08:16:49.940372 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 44946 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:49.941866 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:49.945955 systemd-logind[1582]: New session 11 of user core. Aug 19 08:16:49.955864 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:16:50.058844 sshd[4139]: Connection closed by 10.0.0.1 port 44946 Aug 19 08:16:50.059207 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:50.076365 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:44946.service: Deactivated successfully. Aug 19 08:16:50.078496 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:16:50.079368 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. 
Aug 19 08:16:50.082596 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:44958.service - OpenSSH per-connection server daemon (10.0.0.1:44958). Aug 19 08:16:50.083691 systemd-logind[1582]: Removed session 11. Aug 19 08:16:50.131825 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 44958 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:50.133614 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:50.138347 systemd-logind[1582]: New session 12 of user core. Aug 19 08:16:50.152911 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:16:50.300704 sshd[4156]: Connection closed by 10.0.0.1 port 44958 Aug 19 08:16:50.301658 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:50.315298 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:44958.service: Deactivated successfully. Aug 19 08:16:50.321173 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:16:50.323391 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:16:50.327083 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:44962.service - OpenSSH per-connection server daemon (10.0.0.1:44962). Aug 19 08:16:50.328365 systemd-logind[1582]: Removed session 12. Aug 19 08:16:50.388172 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 44962 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:50.389433 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:50.394329 systemd-logind[1582]: New session 13 of user core. Aug 19 08:16:50.400892 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 08:16:50.514771 sshd[4170]: Connection closed by 10.0.0.1 port 44962 Aug 19 08:16:50.515164 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:50.520272 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:44962.service: Deactivated successfully. Aug 19 08:16:50.522302 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:16:50.523128 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:16:50.524320 systemd-logind[1582]: Removed session 13. Aug 19 08:16:55.529716 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:44968.service - OpenSSH per-connection server daemon (10.0.0.1:44968). Aug 19 08:16:55.583048 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 44968 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:16:55.584403 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:16:55.588905 systemd-logind[1582]: New session 14 of user core. Aug 19 08:16:55.598884 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:16:55.706873 sshd[4188]: Connection closed by 10.0.0.1 port 44968 Aug 19 08:16:55.707267 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Aug 19 08:16:55.711918 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:44968.service: Deactivated successfully. Aug 19 08:16:55.714076 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:16:55.715191 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:16:55.717177 systemd-logind[1582]: Removed session 14. Aug 19 08:17:00.720496 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560). 
Aug 19 08:17:00.775761 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:00.777403 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:00.782215 systemd-logind[1582]: New session 15 of user core. Aug 19 08:17:00.791884 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:17:00.909880 sshd[4204]: Connection closed by 10.0.0.1 port 42560 Aug 19 08:17:00.910239 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:00.914640 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:42560.service: Deactivated successfully. Aug 19 08:17:00.917076 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:17:00.918785 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:17:00.920447 systemd-logind[1582]: Removed session 15. Aug 19 08:17:05.925152 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:42562.service - OpenSSH per-connection server daemon (10.0.0.1:42562). Aug 19 08:17:05.989702 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 42562 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:05.991473 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:05.996329 systemd-logind[1582]: New session 16 of user core. Aug 19 08:17:06.005933 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 08:17:06.127246 sshd[4222]: Connection closed by 10.0.0.1 port 42562 Aug 19 08:17:06.127784 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:06.136657 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:42562.service: Deactivated successfully. Aug 19 08:17:06.138746 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:17:06.139603 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Aug 19 08:17:06.142387 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:42578.service - OpenSSH per-connection server daemon (10.0.0.1:42578). Aug 19 08:17:06.143315 systemd-logind[1582]: Removed session 16. Aug 19 08:17:06.205600 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 42578 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:06.207297 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:06.212460 systemd-logind[1582]: New session 17 of user core. Aug 19 08:17:06.225901 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:17:06.664128 sshd[4241]: Connection closed by 10.0.0.1 port 42578 Aug 19 08:17:06.664658 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:06.673648 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:42578.service: Deactivated successfully. Aug 19 08:17:06.675678 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:17:06.676480 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:17:06.679954 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:42586.service - OpenSSH per-connection server daemon (10.0.0.1:42586). Aug 19 08:17:06.680609 systemd-logind[1582]: Removed session 17. 
Aug 19 08:17:06.730632 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 42586 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:06.732250 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:06.737313 systemd-logind[1582]: New session 18 of user core. Aug 19 08:17:06.745928 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 08:17:08.077507 sshd[4255]: Connection closed by 10.0.0.1 port 42586 Aug 19 08:17:08.079502 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:08.090269 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:42586.service: Deactivated successfully. Aug 19 08:17:08.092708 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:17:08.093945 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:17:08.099780 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:52478.service - OpenSSH per-connection server daemon (10.0.0.1:52478). Aug 19 08:17:08.100808 systemd-logind[1582]: Removed session 18. Aug 19 08:17:08.163078 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 52478 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:08.164913 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:08.169268 systemd-logind[1582]: New session 19 of user core. Aug 19 08:17:08.180881 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:17:08.510372 sshd[4281]: Connection closed by 10.0.0.1 port 52478 Aug 19 08:17:08.510815 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:08.519493 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:52478.service: Deactivated successfully. Aug 19 08:17:08.521821 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:17:08.522641 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:17:08.525608 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:52484.service - OpenSSH per-connection server daemon (10.0.0.1:52484). Aug 19 08:17:08.526572 systemd-logind[1582]: Removed session 19. Aug 19 08:17:08.576337 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 52484 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:08.577723 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:08.582453 systemd-logind[1582]: New session 20 of user core. Aug 19 08:17:08.591889 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:17:08.704924 sshd[4295]: Connection closed by 10.0.0.1 port 52484 Aug 19 08:17:08.705332 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:08.709477 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:52484.service: Deactivated successfully. Aug 19 08:17:08.712049 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:17:08.713642 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:17:08.715478 systemd-logind[1582]: Removed session 20. Aug 19 08:17:13.720924 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:52496.service - OpenSSH per-connection server daemon (10.0.0.1:52496). 
Aug 19 08:17:13.767189 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 52496 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:13.768589 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:13.769011 kubelet[2744]: E0819 08:17:13.768963 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:17:13.773637 systemd-logind[1582]: New session 21 of user core. Aug 19 08:17:13.782979 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:17:13.893350 sshd[4316]: Connection closed by 10.0.0.1 port 52496 Aug 19 08:17:13.893729 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:13.898616 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:52496.service: Deactivated successfully. Aug 19 08:17:13.900710 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:17:13.901432 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Aug 19 08:17:13.902518 systemd-logind[1582]: Removed session 21. Aug 19 08:17:18.919395 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:37380.service - OpenSSH per-connection server daemon (10.0.0.1:37380). Aug 19 08:17:18.960471 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 37380 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:18.961771 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:18.966192 systemd-logind[1582]: New session 22 of user core. Aug 19 08:17:18.979886 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:17:19.088569 sshd[4332]: Connection closed by 10.0.0.1 port 37380 Aug 19 08:17:19.088963 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:19.093711 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:37380.service: Deactivated successfully. Aug 19 08:17:19.095563 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:17:19.096307 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:17:19.097574 systemd-logind[1582]: Removed session 22. Aug 19 08:17:19.769305 kubelet[2744]: E0819 08:17:19.769237 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:17:24.106281 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). Aug 19 08:17:24.168523 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:24.170613 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:24.175393 systemd-logind[1582]: New session 23 of user core. Aug 19 08:17:24.187903 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 19 08:17:24.312823 sshd[4348]: Connection closed by 10.0.0.1 port 37390 Aug 19 08:17:24.313256 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:24.318917 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:37390.service: Deactivated successfully. Aug 19 08:17:24.322015 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 08:17:24.323250 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. 
Aug 19 08:17:24.324664 systemd-logind[1582]: Removed session 23. Aug 19 08:17:26.769198 kubelet[2744]: E0819 08:17:26.769150 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:17:29.326036 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:58980.service - OpenSSH per-connection server daemon (10.0.0.1:58980). Aug 19 08:17:29.388221 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 58980 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:29.389925 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:29.394103 systemd-logind[1582]: New session 24 of user core. Aug 19 08:17:29.404871 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 08:17:29.514235 sshd[4364]: Connection closed by 10.0.0.1 port 58980 Aug 19 08:17:29.514548 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:29.529455 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:58980.service: Deactivated successfully. Aug 19 08:17:29.531619 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 08:17:29.532428 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit. Aug 19 08:17:29.535371 systemd[1]: Started sshd@24-10.0.0.113:22-10.0.0.1:58988.service - OpenSSH per-connection server daemon (10.0.0.1:58988). Aug 19 08:17:29.536044 systemd-logind[1582]: Removed session 24. Aug 19 08:17:29.587504 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 58988 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:29.588706 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:29.593160 systemd-logind[1582]: New session 25 of user core. Aug 19 08:17:29.602884 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 08:17:30.769076 kubelet[2744]: E0819 08:17:30.769039 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 08:17:31.344621 containerd[1602]: time="2025-08-19T08:17:31.344507244Z" level=info msg="StopContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" with timeout 30 (s)" Aug 19 08:17:31.345370 containerd[1602]: time="2025-08-19T08:17:31.345218241Z" level=info msg="Stop container \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" with signal terminated" Aug 19 08:17:31.360328 systemd[1]: cri-containerd-a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2.scope: Deactivated successfully. 
Aug 19 08:17:31.362308 containerd[1602]: time="2025-08-19T08:17:31.362262480Z" level=info msg="received exit event container_id:\"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" id:\"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" pid:3338 exited_at:{seconds:1755591451 nanos:361615986}" Aug 19 08:17:31.362423 containerd[1602]: time="2025-08-19T08:17:31.362258933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" id:\"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" pid:3338 exited_at:{seconds:1755591451 nanos:361615986}" Aug 19 08:17:31.379089 containerd[1602]: time="2025-08-19T08:17:31.379006356Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:17:31.379582 containerd[1602]: time="2025-08-19T08:17:31.379546797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" id:\"87fd29e30d7d3687bb6dc3ba4deae000926183168984f8acfad5a92ed8ad806e\" pid:4407 exited_at:{seconds:1755591451 nanos:379145000}" Aug 19 08:17:31.382562 containerd[1602]: time="2025-08-19T08:17:31.382176426Z" level=info msg="StopContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" with timeout 2 (s)" Aug 19 08:17:31.383099 containerd[1602]: time="2025-08-19T08:17:31.383053229Z" level=info msg="Stop container \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" with signal terminated" Aug 19 08:17:31.388835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2-rootfs.mount: Deactivated successfully. Aug 19 08:17:31.394607 systemd-networkd[1490]: lxc_health: Link DOWN Aug 19 08:17:31.395017 systemd-networkd[1490]: lxc_health: Lost carrier Aug 19 08:17:31.415617 systemd[1]: cri-containerd-a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a.scope: Deactivated successfully. Aug 19 08:17:31.416495 systemd[1]: cri-containerd-a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a.scope: Consumed 6.741s CPU time, 124.1M memory peak, 172K read from disk, 13.3M written to disk. 
Aug 19 08:17:31.417785 containerd[1602]: time="2025-08-19T08:17:31.417633266Z" level=info msg="received exit event container_id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" pid:3406 exited_at:{seconds:1755591451 nanos:417264812}" Aug 19 08:17:31.418183 containerd[1602]: time="2025-08-19T08:17:31.418144882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" id:\"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" pid:3406 exited_at:{seconds:1755591451 nanos:417264812}" Aug 19 08:17:31.424374 containerd[1602]: time="2025-08-19T08:17:31.424286873Z" level=info msg="StopContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" returns successfully" Aug 19 08:17:31.428625 containerd[1602]: time="2025-08-19T08:17:31.428543808Z" level=info msg="StopPodSandbox for \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\"" Aug 19 08:17:31.437165 containerd[1602]: time="2025-08-19T08:17:31.437106660Z" level=info msg="Container to stop \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.446934 containerd[1602]: time="2025-08-19T08:17:31.446886115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" id:\"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" pid:2940 exit_status:137 exited_at:{seconds:1755591451 nanos:446544302}" Aug 19 08:17:31.447042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a-rootfs.mount: Deactivated successfully. Aug 19 08:17:31.447890 systemd[1]: cri-containerd-848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db.scope: Deactivated successfully. 
Aug 19 08:17:31.459247 containerd[1602]: time="2025-08-19T08:17:31.459190577Z" level=info msg="StopContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" returns successfully" Aug 19 08:17:31.459947 containerd[1602]: time="2025-08-19T08:17:31.459919529Z" level=info msg="StopPodSandbox for \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\"" Aug 19 08:17:31.460028 containerd[1602]: time="2025-08-19T08:17:31.459988370Z" level=info msg="Container to stop \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.460028 containerd[1602]: time="2025-08-19T08:17:31.460020190Z" level=info msg="Container to stop \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.460078 containerd[1602]: time="2025-08-19T08:17:31.460033657Z" level=info msg="Container to stop \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.460078 containerd[1602]: time="2025-08-19T08:17:31.460046040Z" level=info msg="Container to stop \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.460078 containerd[1602]: time="2025-08-19T08:17:31.460056861Z" level=info msg="Container to stop \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:17:31.466206 systemd[1]: cri-containerd-415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531.scope: Deactivated successfully. Aug 19 08:17:31.474615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db-rootfs.mount: Deactivated successfully. Aug 19 08:17:31.491939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531-rootfs.mount: Deactivated successfully. Aug 19 08:17:31.539255 containerd[1602]: time="2025-08-19T08:17:31.539211267Z" level=info msg="shim disconnected" id=848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db namespace=k8s.io Aug 19 08:17:31.539255 containerd[1602]: time="2025-08-19T08:17:31.539247977Z" level=warning msg="cleaning up after shim disconnected" id=848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db namespace=k8s.io Aug 19 08:17:31.542086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db-shm.mount: Deactivated successfully. 
Aug 19 08:17:31.555029 containerd[1602]: time="2025-08-19T08:17:31.539262825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:17:31.555173 containerd[1602]: time="2025-08-19T08:17:31.539445494Z" level=info msg="shim disconnected" id=415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531 namespace=k8s.io Aug 19 08:17:31.555249 containerd[1602]: time="2025-08-19T08:17:31.539546106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" id:\"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" pid:2931 exit_status:137 exited_at:{seconds:1755591451 nanos:467883287}" Aug 19 08:17:31.555429 containerd[1602]: time="2025-08-19T08:17:31.553699838Z" level=info msg="TearDown network for sandbox \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" successfully" Aug 19 08:17:31.555429 containerd[1602]: time="2025-08-19T08:17:31.555149375Z" level=warning msg="cleaning up after shim disconnected" id=415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531 namespace=k8s.io Aug 19 08:17:31.555429 containerd[1602]: time="2025-08-19T08:17:31.555325070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:17:31.555711 containerd[1602]: time="2025-08-19T08:17:31.555685789Z" level=info msg="TearDown network for sandbox \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" successfully" Aug 19 08:17:31.556020 containerd[1602]: time="2025-08-19T08:17:31.555770720Z" level=info msg="StopPodSandbox for \"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" returns successfully" Aug 19 08:17:31.556020 containerd[1602]: time="2025-08-19T08:17:31.555309431Z" level=info msg="StopPodSandbox for \"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" returns successfully" Aug 19 08:17:31.556359 containerd[1602]: time="2025-08-19T08:17:31.556307906Z" level=info msg="received exit event sandbox_id:\"848ab4945285932536144a1e5031364f4f41db33f11880023d1b3455697334db\" exit_status:137 exited_at:{seconds:1755591451 nanos:446544302}" Aug 19 08:17:31.558222 containerd[1602]: time="2025-08-19T08:17:31.556728188Z" level=info msg="received exit event sandbox_id:\"415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531\" exit_status:137 exited_at:{seconds:1755591451 nanos:467883287}" Aug 19 08:17:31.762371 kubelet[2744]: I0819 08:17:31.762322 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-etc-cni-netd\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762371 kubelet[2744]: I0819 08:17:31.762372 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-xtables-lock\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762405 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29eba499-2eb0-45cc-adc2-ce2cef4738e8-cilium-config-path\") pod \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\" (UID: \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\") " Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762428 2744 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-net\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762449 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-hubble-tls\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762353 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762473 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae0e5a86-a329-48d5-995a-09c169f434f6-clustermesh-secrets\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762565 kubelet[2744]: I0819 08:17:31.762494 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tkfc\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-kube-api-access-6tkfc\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762498 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762514 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-kernel\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762533 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cni-path\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762550 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-lib-modules\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762571 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-cgroup\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762712 kubelet[2744]: I0819 08:17:31.762590 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-hostproc\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762608 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-bpf-maps\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762629 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-config-path\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762653 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-run\") pod \"ae0e5a86-a329-48d5-995a-09c169f434f6\" (UID: \"ae0e5a86-a329-48d5-995a-09c169f434f6\") " Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762680 2744 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vmp2\" (UniqueName: \"kubernetes.io/projected/29eba499-2eb0-45cc-adc2-ce2cef4738e8-kube-api-access-8vmp2\") pod \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\" (UID: \"29eba499-2eb0-45cc-adc2-ce2cef4738e8\") " Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762714 2744 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.762883 kubelet[2744]: I0819 08:17:31.762727 2744 reconciler_common.go:293] 
"Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.763341 kubelet[2744]: I0819 08:17:31.763067 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.763341 kubelet[2744]: I0819 08:17:31.763099 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.763805 kubelet[2744]: I0819 08:17:31.763767 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.763805 kubelet[2744]: I0819 08:17:31.763783 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.763805 kubelet[2744]: I0819 08:17:31.763767 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.763805 kubelet[2744]: I0819 08:17:31.763807 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.764091 kubelet[2744]: I0819 08:17:31.764044 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.764091 kubelet[2744]: I0819 08:17:31.764081 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 19 08:17:31.766871 kubelet[2744]: I0819 08:17:31.766786 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29eba499-2eb0-45cc-adc2-ce2cef4738e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29eba499-2eb0-45cc-adc2-ce2cef4738e8" (UID: "29eba499-2eb0-45cc-adc2-ce2cef4738e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 08:17:31.767706 kubelet[2744]: I0819 08:17:31.767662 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:17:31.767706 kubelet[2744]: I0819 08:17:31.767695 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-kube-api-access-6tkfc" (OuterVolumeSpecName: "kube-api-access-6tkfc") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "kube-api-access-6tkfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:17:31.768055 kubelet[2744]: I0819 08:17:31.768033 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29eba499-2eb0-45cc-adc2-ce2cef4738e8-kube-api-access-8vmp2" (OuterVolumeSpecName: "kube-api-access-8vmp2") pod "29eba499-2eb0-45cc-adc2-ce2cef4738e8" (UID: "29eba499-2eb0-45cc-adc2-ce2cef4738e8"). InnerVolumeSpecName "kube-api-access-8vmp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 19 08:17:31.769009 kubelet[2744]: I0819 08:17:31.768979 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0e5a86-a329-48d5-995a-09c169f434f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 19 08:17:31.769221 kubelet[2744]: I0819 08:17:31.769200 2744 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae0e5a86-a329-48d5-995a-09c169f434f6" (UID: "ae0e5a86-a329-48d5-995a-09c169f434f6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 19 08:17:31.863592 kubelet[2744]: I0819 08:17:31.863515 2744 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863592 kubelet[2744]: I0819 08:17:31.863560 2744 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863592 kubelet[2744]: I0819 08:17:31.863597 2744 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863592 kubelet[2744]: I0819 08:17:31.863606 2744 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863592 kubelet[2744]: I0819 08:17:31.863616 2744 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863625 2744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vmp2\" (UniqueName: \"kubernetes.io/projected/29eba499-2eb0-45cc-adc2-ce2cef4738e8-kube-api-access-8vmp2\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863633 2744 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29eba499-2eb0-45cc-adc2-ce2cef4738e8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863640 2744 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863648 2744 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863657 2744 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863665 2744 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863672 2744 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae0e5a86-a329-48d5-995a-09c169f434f6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.863958 kubelet[2744]: I0819 08:17:31.863679 2744 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tkfc\" (UniqueName: \"kubernetes.io/projected/ae0e5a86-a329-48d5-995a-09c169f434f6-kube-api-access-6tkfc\") 
on node \"localhost\" DevicePath \"\"" Aug 19 08:17:31.864215 kubelet[2744]: I0819 08:17:31.863687 2744 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae0e5a86-a329-48d5-995a-09c169f434f6-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 19 08:17:32.085493 kubelet[2744]: I0819 08:17:32.085348 2744 scope.go:117] "RemoveContainer" containerID="a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a" Aug 19 08:17:32.088235 containerd[1602]: time="2025-08-19T08:17:32.088196178Z" level=info msg="RemoveContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\"" Aug 19 08:17:32.092610 systemd[1]: Removed slice kubepods-burstable-podae0e5a86_a329_48d5_995a_09c169f434f6.slice - libcontainer container kubepods-burstable-podae0e5a86_a329_48d5_995a_09c169f434f6.slice. Aug 19 08:17:32.092714 systemd[1]: kubepods-burstable-podae0e5a86_a329_48d5_995a_09c169f434f6.slice: Consumed 6.867s CPU time, 124.4M memory peak, 192K read from disk, 16.6M written to disk. Aug 19 08:17:32.171654 systemd[1]: Removed slice kubepods-besteffort-pod29eba499_2eb0_45cc_adc2_ce2cef4738e8.slice - libcontainer container kubepods-besteffort-pod29eba499_2eb0_45cc_adc2_ce2cef4738e8.slice. Aug 19 08:17:32.264927 containerd[1602]: time="2025-08-19T08:17:32.264865085Z" level=info msg="RemoveContainer for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" returns successfully" Aug 19 08:17:32.286061 kubelet[2744]: I0819 08:17:32.286015 2744 scope.go:117] "RemoveContainer" containerID="962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054" Aug 19 08:17:32.287585 containerd[1602]: time="2025-08-19T08:17:32.287551053Z" level=info msg="RemoveContainer for \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\"" Aug 19 08:17:32.292834 containerd[1602]: time="2025-08-19T08:17:32.292785128Z" level=info msg="RemoveContainer for \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" returns successfully" Aug 19 08:17:32.293051 kubelet[2744]: I0819 08:17:32.292999 2744 scope.go:117] "RemoveContainer" containerID="1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53" Aug 19 08:17:32.295028 containerd[1602]: time="2025-08-19T08:17:32.294988932Z" level=info msg="RemoveContainer for \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\"" Aug 19 08:17:32.299444 containerd[1602]: time="2025-08-19T08:17:32.299403393Z" level=info msg="RemoveContainer for \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" returns successfully" Aug 19 08:17:32.299600 kubelet[2744]: I0819 08:17:32.299571 2744 scope.go:117] "RemoveContainer" containerID="35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8" Aug 19 08:17:32.300837 containerd[1602]: time="2025-08-19T08:17:32.300811079Z" level=info msg="RemoveContainer for \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\"" Aug 19 08:17:32.317522 containerd[1602]: time="2025-08-19T08:17:32.317473435Z" level=info msg="RemoveContainer for \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" returns successfully" Aug 19 08:17:32.317694 kubelet[2744]: I0819 08:17:32.317662 2744 scope.go:117] "RemoveContainer" containerID="6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef" Aug 19 08:17:32.318796 containerd[1602]: time="2025-08-19T08:17:32.318774518Z" level=info msg="RemoveContainer for \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\"" Aug 19 
08:17:32.324751 containerd[1602]: time="2025-08-19T08:17:32.324706253Z" level=info msg="RemoveContainer for \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" returns successfully" Aug 19 08:17:32.324883 kubelet[2744]: I0819 08:17:32.324849 2744 scope.go:117] "RemoveContainer" containerID="a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a" Aug 19 08:17:32.325270 containerd[1602]: time="2025-08-19T08:17:32.325195617Z" level=error msg="ContainerStatus for \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\": not found" Aug 19 08:17:32.326052 kubelet[2744]: E0819 08:17:32.325994 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\": not found" containerID="a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a" Aug 19 08:17:32.326159 kubelet[2744]: I0819 08:17:32.326046 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a"} err="failed to get container status \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\": rpc error: code = NotFound desc = an error occurred when try to find container \"a59d04d788cbc817623c3588adae1686d994e14f32c950589a8b32a7e984d45a\": not found" Aug 19 08:17:32.326159 kubelet[2744]: I0819 08:17:32.326149 2744 scope.go:117] "RemoveContainer" containerID="962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054" Aug 19 08:17:32.326378 containerd[1602]: time="2025-08-19T08:17:32.326321855Z" level=error msg="ContainerStatus for \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\": not found" Aug 19 08:17:32.326581 kubelet[2744]: E0819 08:17:32.326453 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\": not found" containerID="962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054" Aug 19 08:17:32.326581 kubelet[2744]: I0819 08:17:32.326476 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054"} err="failed to get container status \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\": rpc error: code = NotFound desc = an error occurred when try to find container \"962e6e8b18d749d5051851318bce0a403313fdc9a887c6fe720810c4a014a054\": not found" Aug 19 08:17:32.326581 kubelet[2744]: I0819 08:17:32.326495 2744 scope.go:117] "RemoveContainer" containerID="1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53" Aug 19 08:17:32.326677 containerd[1602]: time="2025-08-19T08:17:32.326646775Z" level=error msg="ContainerStatus for \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\": not found" Aug 19 
08:17:32.326793 kubelet[2744]: E0819 08:17:32.326770 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\": not found" containerID="1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53" Aug 19 08:17:32.326850 kubelet[2744]: I0819 08:17:32.326794 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53"} err="failed to get container status \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d7f2539f7e96c8d41d0a3608ec9db58c09450387d559890e52d8c20c896db53\": not found" Aug 19 08:17:32.326850 kubelet[2744]: I0819 08:17:32.326808 2744 scope.go:117] "RemoveContainer" containerID="35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8" Aug 19 08:17:32.326992 containerd[1602]: time="2025-08-19T08:17:32.326954562Z" level=error msg="ContainerStatus for \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\": not found" Aug 19 08:17:32.327122 kubelet[2744]: E0819 08:17:32.327083 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\": not found" containerID="35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8" Aug 19 08:17:32.327122 kubelet[2744]: I0819 08:17:32.327103 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8"} err="failed to get container status \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\": rpc error: code = NotFound desc = an error occurred when try to find container \"35b334275c250f165f9c1749b7bafe242b2f31a121e6283e66624cf4a9a5afa8\": not found" Aug 19 08:17:32.327122 kubelet[2744]: I0819 08:17:32.327118 2744 scope.go:117] "RemoveContainer" containerID="6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef" Aug 19 08:17:32.327304 containerd[1602]: time="2025-08-19T08:17:32.327270846Z" level=error msg="ContainerStatus for \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\": not found" Aug 19 08:17:32.327425 kubelet[2744]: E0819 08:17:32.327394 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\": not found" containerID="6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef" Aug 19 08:17:32.327425 kubelet[2744]: I0819 08:17:32.327411 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef"} err="failed to get container status \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"6287a972b5797cf4261f4208f45cb10c97b039b9710872e621199d4418cc9fef\": not found" Aug 19 08:17:32.327425 kubelet[2744]: I0819 08:17:32.327425 2744 scope.go:117] "RemoveContainer" containerID="a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2" Aug 19 08:17:32.328762 containerd[1602]: time="2025-08-19T08:17:32.328719440Z" level=info msg="RemoveContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\"" Aug 19 08:17:32.334652 containerd[1602]: time="2025-08-19T08:17:32.334602913Z" level=info msg="RemoveContainer for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" returns successfully" Aug 19 08:17:32.334870 kubelet[2744]: I0819 08:17:32.334833 2744 scope.go:117] "RemoveContainer" containerID="a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2" Aug 19 08:17:32.335118 containerd[1602]: time="2025-08-19T08:17:32.335067550Z" level=error msg="ContainerStatus for \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\": not found" Aug 19 08:17:32.335263 kubelet[2744]: E0819 08:17:32.335216 2744 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\": not found" containerID="a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2" Aug 19 08:17:32.335311 kubelet[2744]: I0819 08:17:32.335261 2744 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2"} err="failed to get container status \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a75e99e0f877f10cdc0b8b99e2aef78381a9ad8ec06b0428ba9226b5a4f8d1c2\": not found" Aug 19 08:17:32.388842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-415077c815ad899b18629b761c61c2a2a894bd3a6ba4508d3a76241a3f16a531-shm.mount: Deactivated successfully. Aug 19 08:17:32.388995 systemd[1]: var-lib-kubelet-pods-29eba499\x2d2eb0\x2d45cc\x2dadc2\x2dce2cef4738e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vmp2.mount: Deactivated successfully. Aug 19 08:17:32.389110 systemd[1]: var-lib-kubelet-pods-ae0e5a86\x2da329\x2d48d5\x2d995a\x2d09c169f434f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6tkfc.mount: Deactivated successfully. Aug 19 08:17:32.389206 systemd[1]: var-lib-kubelet-pods-ae0e5a86\x2da329\x2d48d5\x2d995a\x2d09c169f434f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 08:17:32.389321 systemd[1]: var-lib-kubelet-pods-ae0e5a86\x2da329\x2d48d5\x2d995a\x2d09c169f434f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 19 08:17:32.770852 kubelet[2744]: I0819 08:17:32.770800 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29eba499-2eb0-45cc-adc2-ce2cef4738e8" path="/var/lib/kubelet/pods/29eba499-2eb0-45cc-adc2-ce2cef4738e8/volumes" Aug 19 08:17:32.771366 kubelet[2744]: I0819 08:17:32.771344 2744 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" path="/var/lib/kubelet/pods/ae0e5a86-a329-48d5-995a-09c169f434f6/volumes" Aug 19 08:17:33.297326 sshd[4381]: Connection closed by 10.0.0.1 port 58988 Aug 19 08:17:33.297838 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:33.316931 systemd[1]: sshd@24-10.0.0.113:22-10.0.0.1:58988.service: Deactivated successfully. Aug 19 08:17:33.319173 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 08:17:33.320166 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit. Aug 19 08:17:33.323140 systemd[1]: Started sshd@25-10.0.0.113:22-10.0.0.1:58994.service - OpenSSH per-connection server daemon (10.0.0.1:58994). Aug 19 08:17:33.323778 systemd-logind[1582]: Removed session 25. Aug 19 08:17:33.370678 sshd[4526]: Accepted publickey for core from 10.0.0.1 port 58994 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM Aug 19 08:17:33.372358 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:17:33.377317 systemd-logind[1582]: New session 26 of user core. Aug 19 08:17:33.386872 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 19 08:17:34.060485 sshd[4529]: Connection closed by 10.0.0.1 port 58994 Aug 19 08:17:34.061676 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Aug 19 08:17:34.074659 systemd[1]: sshd@25-10.0.0.113:22-10.0.0.1:58994.service: Deactivated successfully. Aug 19 08:17:34.077708 systemd[1]: session-26.scope: Deactivated successfully. Aug 19 08:17:34.079606 systemd-logind[1582]: Session 26 logged out. Waiting for processes to exit. 
Aug 19 08:17:34.083039 kubelet[2744]: E0819 08:17:34.082985 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="mount-cgroup"
Aug 19 08:17:34.083039 kubelet[2744]: E0819 08:17:34.083023 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="apply-sysctl-overwrites"
Aug 19 08:17:34.083039 kubelet[2744]: E0819 08:17:34.083030 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="cilium-agent"
Aug 19 08:17:34.083039 kubelet[2744]: E0819 08:17:34.083037 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="mount-bpf-fs"
Aug 19 08:17:34.083039 kubelet[2744]: E0819 08:17:34.083043 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29eba499-2eb0-45cc-adc2-ce2cef4738e8" containerName="cilium-operator"
Aug 19 08:17:34.085114 kubelet[2744]: E0819 08:17:34.083049 2744 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="clean-cilium-state"
Aug 19 08:17:34.085114 kubelet[2744]: I0819 08:17:34.083081 2744 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0e5a86-a329-48d5-995a-09c169f434f6" containerName="cilium-agent"
Aug 19 08:17:34.085114 kubelet[2744]: I0819 08:17:34.083088 2744 memory_manager.go:354] "RemoveStaleState removing state" podUID="29eba499-2eb0-45cc-adc2-ce2cef4738e8" containerName="cilium-operator"
Aug 19 08:17:34.084433 systemd[1]: Started sshd@26-10.0.0.113:22-10.0.0.1:58996.service - OpenSSH per-connection server daemon (10.0.0.1:58996).
Aug 19 08:17:34.089330 systemd-logind[1582]: Removed session 26.
Aug 19 08:17:34.101322 systemd[1]: Created slice kubepods-burstable-podc098ae45_774e_4cd5_abbf_2d66a8c85725.slice - libcontainer container kubepods-burstable-podc098ae45_774e_4cd5_abbf_2d66a8c85725.slice.
Aug 19 08:17:34.136885 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 58996 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:17:34.138224 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:34.142931 systemd-logind[1582]: New session 27 of user core.
Aug 19 08:17:34.156884 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 19 08:17:34.208345 sshd[4544]: Connection closed by 10.0.0.1 port 58996
Aug 19 08:17:34.208749 sshd-session[4541]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:34.221595 systemd[1]: sshd@26-10.0.0.113:22-10.0.0.1:58996.service: Deactivated successfully.
Aug 19 08:17:34.223361 systemd[1]: session-27.scope: Deactivated successfully.
Aug 19 08:17:34.224285 systemd-logind[1582]: Session 27 logged out. Waiting for processes to exit.
Aug 19 08:17:34.227087 systemd[1]: Started sshd@27-10.0.0.113:22-10.0.0.1:59006.service - OpenSSH per-connection server daemon (10.0.0.1:59006).
Aug 19 08:17:34.227778 systemd-logind[1582]: Removed session 27.
Aug 19 08:17:34.283479 kubelet[2744]: I0819 08:17:34.283387 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4m8c\" (UniqueName: \"kubernetes.io/projected/c098ae45-774e-4cd5-abbf-2d66a8c85725-kube-api-access-l4m8c\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283479 kubelet[2744]: I0819 08:17:34.283457 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-cni-path\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283479 kubelet[2744]: I0819 08:17:34.283481 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-host-proc-sys-net\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283502 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-etc-cni-netd\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283525 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-host-proc-sys-kernel\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283585 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-cilium-run\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283620 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c098ae45-774e-4cd5-abbf-2d66a8c85725-hubble-tls\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283661 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-xtables-lock\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.283884 kubelet[2744]: I0819 08:17:34.283683 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c098ae45-774e-4cd5-abbf-2d66a8c85725-clustermesh-secrets\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283710 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c098ae45-774e-4cd5-abbf-2d66a8c85725-cilium-config-path\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283725 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-lib-modules\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283761 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-hostproc\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283778 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c098ae45-774e-4cd5-abbf-2d66a8c85725-cilium-ipsec-secrets\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283793 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-bpf-maps\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284129 kubelet[2744]: I0819 08:17:34.283817 2744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c098ae45-774e-4cd5-abbf-2d66a8c85725-cilium-cgroup\") pod \"cilium-79kb6\" (UID: \"c098ae45-774e-4cd5-abbf-2d66a8c85725\") " pod="kube-system/cilium-79kb6"
Aug 19 08:17:34.284645 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:kecLVWRG1G7MHrHN/yG6X078KPWjs/jTMbEJqAmOzyM
Aug 19 08:17:34.285256 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:34.290006 systemd-logind[1582]: New session 28 of user core.
Aug 19 08:17:34.295960 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 19 08:17:34.406162 kubelet[2744]: E0819 08:17:34.406008 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:34.407184 containerd[1602]: time="2025-08-19T08:17:34.406587337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79kb6,Uid:c098ae45-774e-4cd5-abbf-2d66a8c85725,Namespace:kube-system,Attempt:0,}"
Aug 19 08:17:34.431494 containerd[1602]: time="2025-08-19T08:17:34.431442665Z" level=info msg="connecting to shim 7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" namespace=k8s.io protocol=ttrpc version=3
Aug 19 08:17:34.459881 systemd[1]: Started cri-containerd-7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7.scope - libcontainer container 7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7.
Aug 19 08:17:34.487969 containerd[1602]: time="2025-08-19T08:17:34.487908354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79kb6,Uid:c098ae45-774e-4cd5-abbf-2d66a8c85725,Namespace:kube-system,Attempt:0,} returns sandbox id \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\""
Aug 19 08:17:34.488688 kubelet[2744]: E0819 08:17:34.488659 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:34.491133 containerd[1602]: time="2025-08-19T08:17:34.491073528Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 19 08:17:34.500551 containerd[1602]: time="2025-08-19T08:17:34.500305194Z" level=info msg="Container 876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:17:34.509643 containerd[1602]: time="2025-08-19T08:17:34.509594240Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\""
Aug 19 08:17:34.510189 containerd[1602]: time="2025-08-19T08:17:34.510146753Z" level=info msg="StartContainer for \"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\""
Aug 19 08:17:34.514996 containerd[1602]: time="2025-08-19T08:17:34.514948576Z" level=info msg="connecting to shim 876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" protocol=ttrpc version=3
Aug 19 08:17:34.541919 systemd[1]: Started cri-containerd-876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678.scope - libcontainer container 876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678.
Aug 19 08:17:34.573766 containerd[1602]: time="2025-08-19T08:17:34.573662431Z" level=info msg="StartContainer for \"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\" returns successfully"
Aug 19 08:17:34.583182 systemd[1]: cri-containerd-876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678.scope: Deactivated successfully.
Aug 19 08:17:34.584473 containerd[1602]: time="2025-08-19T08:17:34.584416950Z" level=info msg="received exit event container_id:\"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\" id:\"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\" pid:4625 exited_at:{seconds:1755591454 nanos:583989616}"
Aug 19 08:17:34.589130 containerd[1602]: time="2025-08-19T08:17:34.589083986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\" id:\"876de93c7d67d8238b2c7d30cbe4e9f075f398caa0fca1e65d62b5403f478678\" pid:4625 exited_at:{seconds:1755591454 nanos:583989616}"
Aug 19 08:17:34.849891 kubelet[2744]: E0819 08:17:34.849795 2744 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 19 08:17:35.100257 kubelet[2744]: E0819 08:17:35.100097 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:35.101667 containerd[1602]: time="2025-08-19T08:17:35.101628473Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 19 08:17:35.403633 containerd[1602]: time="2025-08-19T08:17:35.403330874Z" level=info msg="Container bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:17:35.411990 containerd[1602]: time="2025-08-19T08:17:35.411935680Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\""
Aug 19 08:17:35.412621 containerd[1602]: time="2025-08-19T08:17:35.412571742Z" level=info msg="StartContainer for \"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\""
Aug 19 08:17:35.413452 containerd[1602]: time="2025-08-19T08:17:35.413407565Z" level=info msg="connecting to shim bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" protocol=ttrpc version=3
Aug 19 08:17:35.437877 systemd[1]: Started cri-containerd-bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36.scope - libcontainer container bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36.
Aug 19 08:17:35.467938 containerd[1602]: time="2025-08-19T08:17:35.467897026Z" level=info msg="StartContainer for \"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\" returns successfully"
Aug 19 08:17:35.474621 systemd[1]: cri-containerd-bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36.scope: Deactivated successfully.
Aug 19 08:17:35.475179 containerd[1602]: time="2025-08-19T08:17:35.475147261Z" level=info msg="received exit event container_id:\"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\" id:\"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\" pid:4671 exited_at:{seconds:1755591455 nanos:474934996}"
Aug 19 08:17:35.475312 containerd[1602]: time="2025-08-19T08:17:35.475281357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\" id:\"bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36\" pid:4671 exited_at:{seconds:1755591455 nanos:474934996}"
Aug 19 08:17:35.496282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bad0f3c0bdc885eee02233e608d50d98606d8d899c1d49e2a431779f62e43b36-rootfs.mount: Deactivated successfully.
Aug 19 08:17:36.104600 kubelet[2744]: E0819 08:17:36.104545 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:36.107230 containerd[1602]: time="2025-08-19T08:17:36.106690854Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 19 08:17:36.251478 containerd[1602]: time="2025-08-19T08:17:36.251406824Z" level=info msg="Container d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:17:36.313819 containerd[1602]: time="2025-08-19T08:17:36.313758812Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\""
Aug 19 08:17:36.314386 containerd[1602]: time="2025-08-19T08:17:36.314307006Z" level=info msg="StartContainer for \"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\""
Aug 19 08:17:36.315677 containerd[1602]: time="2025-08-19T08:17:36.315645506Z" level=info msg="connecting to shim d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" protocol=ttrpc version=3
Aug 19 08:17:36.342810 kubelet[2744]: I0819 08:17:36.342166 2744 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T08:17:36Z","lastTransitionTime":"2025-08-19T08:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 19 08:17:36.352983 systemd[1]: Started cri-containerd-d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810.scope - libcontainer container d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810.
Aug 19 08:17:36.400375 systemd[1]: cri-containerd-d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810.scope: Deactivated successfully.
Aug 19 08:17:36.401394 containerd[1602]: time="2025-08-19T08:17:36.401309194Z" level=info msg="received exit event container_id:\"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\" id:\"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\" pid:4715 exited_at:{seconds:1755591456 nanos:401056232}"
Aug 19 08:17:36.401713 containerd[1602]: time="2025-08-19T08:17:36.401406009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\" id:\"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\" pid:4715 exited_at:{seconds:1755591456 nanos:401056232}"
Aug 19 08:17:36.401713 containerd[1602]: time="2025-08-19T08:17:36.401515617Z" level=info msg="StartContainer for \"d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810\" returns successfully"
Aug 19 08:17:36.425939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3bc0748c3051631c044f51a6da86383de5f8af668a3ce60476aaceed39b7810-rootfs.mount: Deactivated successfully.
Aug 19 08:17:37.109709 kubelet[2744]: E0819 08:17:37.109667 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:37.115887 containerd[1602]: time="2025-08-19T08:17:37.115816048Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 19 08:17:37.125916 containerd[1602]: time="2025-08-19T08:17:37.125867896Z" level=info msg="Container b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:17:37.130198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662618838.mount: Deactivated successfully.
Aug 19 08:17:37.134824 containerd[1602]: time="2025-08-19T08:17:37.134785826Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\""
Aug 19 08:17:37.135358 containerd[1602]: time="2025-08-19T08:17:37.135317799Z" level=info msg="StartContainer for \"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\""
Aug 19 08:17:37.136178 containerd[1602]: time="2025-08-19T08:17:37.136146076Z" level=info msg="connecting to shim b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" protocol=ttrpc version=3
Aug 19 08:17:37.166894 systemd[1]: Started cri-containerd-b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae.scope - libcontainer container b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae.
Aug 19 08:17:37.192537 systemd[1]: cri-containerd-b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae.scope: Deactivated successfully.
Aug 19 08:17:37.193980 containerd[1602]: time="2025-08-19T08:17:37.193929061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\" id:\"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\" pid:4754 exited_at:{seconds:1755591457 nanos:193047783}"
Aug 19 08:17:37.195195 containerd[1602]: time="2025-08-19T08:17:37.195141118Z" level=info msg="received exit event container_id:\"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\" id:\"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\" pid:4754 exited_at:{seconds:1755591457 nanos:193047783}"
Aug 19 08:17:37.202914 containerd[1602]: time="2025-08-19T08:17:37.202866236Z" level=info msg="StartContainer for \"b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae\" returns successfully"
Aug 19 08:17:37.392259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b55d544c092e33e253f0a2e7e3596192c0a2132861528685a6f80934711a3fae-rootfs.mount: Deactivated successfully.
Aug 19 08:17:38.114232 kubelet[2744]: E0819 08:17:38.114190 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:38.116306 containerd[1602]: time="2025-08-19T08:17:38.116249209Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 19 08:17:38.144181 containerd[1602]: time="2025-08-19T08:17:38.144108553Z" level=info msg="Container d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:17:38.152268 containerd[1602]: time="2025-08-19T08:17:38.152212517Z" level=info msg="CreateContainer within sandbox \"7155a9e6db1ec63b2156bdb3321b65e165049102c043b168a7b56e2a2dc88be7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\""
Aug 19 08:17:38.152756 containerd[1602]: time="2025-08-19T08:17:38.152699924Z" level=info msg="StartContainer for \"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\""
Aug 19 08:17:38.153864 containerd[1602]: time="2025-08-19T08:17:38.153829655Z" level=info msg="connecting to shim d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6" address="unix:///run/containerd/s/ec7360c2b33edba2ae2e7b2e55452248ceb7535b27ce7028b44fbdf12d486de5" protocol=ttrpc version=3
Aug 19 08:17:38.175900 systemd[1]: Started cri-containerd-d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6.scope - libcontainer container d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6.
Aug 19 08:17:38.211925 containerd[1602]: time="2025-08-19T08:17:38.211872775Z" level=info msg="StartContainer for \"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" returns successfully"
Aug 19 08:17:38.284481 containerd[1602]: time="2025-08-19T08:17:38.284421053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" id:\"dbfeb187cdd4ef744ad405121d99f418e0eaf194b69b8f2fd992b89804d84a24\" pid:4822 exited_at:{seconds:1755591458 nanos:284004711}"
Aug 19 08:17:38.653762 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 19 08:17:39.120994 kubelet[2744]: E0819 08:17:39.120960 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:39.134486 kubelet[2744]: I0819 08:17:39.134382 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-79kb6" podStartSLOduration=5.134347915 podStartE2EDuration="5.134347915s" podCreationTimestamp="2025-08-19 08:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:17:39.134033567 +0000 UTC m=+94.457725127" watchObservedRunningTime="2025-08-19 08:17:39.134347915 +0000 UTC m=+94.458039475"
Aug 19 08:17:40.414314 kubelet[2744]: E0819 08:17:40.414176 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:40.715498 containerd[1602]: time="2025-08-19T08:17:40.715336691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" id:\"d7d245fa6a4d9bcc785605577db87108ae4fc770109cd5a99ca8e81c6ed3cb03\" pid:5027 exit_status:1 exited_at:{seconds:1755591460 nanos:714857640}"
Aug 19 08:17:41.789366 systemd-networkd[1490]: lxc_health: Link UP
Aug 19 08:17:41.791572 systemd-networkd[1490]: lxc_health: Gained carrier
Aug 19 08:17:42.410309 kubelet[2744]: E0819 08:17:42.410138 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:42.947565 containerd[1602]: time="2025-08-19T08:17:42.947421586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" id:\"9c680ca868e6d02e772604178161c7ed39b3abef7ac3699d91bcfd7322ee68cc\" pid:5363 exited_at:{seconds:1755591462 nanos:947089975}"
Aug 19 08:17:43.136126 kubelet[2744]: E0819 08:17:43.136069 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:43.746233 systemd-networkd[1490]: lxc_health: Gained IPv6LL
Aug 19 08:17:44.137788 kubelet[2744]: E0819 08:17:44.137622 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 08:17:45.050138 containerd[1602]: time="2025-08-19T08:17:45.049718526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" id:\"a79939d9696c8d061acde7c1144b1cf4afefca662514b2a8a557e108199b2a2d\" pid:5391 exited_at:{seconds:1755591465 nanos:48249877}"
Aug 19 08:17:47.141311 containerd[1602]: time="2025-08-19T08:17:47.141257993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6e0b02f2b91131e98dc7e1d5ff839c8e195824ca15734fe51ad666df43593a6\" id:\"0362fd0e8c6fd9962715e7be2e4d910d2e95e451f8ea9fe453f61fd1e839e575\" pid:5422 exited_at:{seconds:1755591467 nanos:140899392}"
Aug 19 08:17:47.153603 sshd[4555]: Connection closed by 10.0.0.1 port 59006
Aug 19 08:17:47.154130 sshd-session[4551]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:47.157631 systemd[1]: sshd@27-10.0.0.113:22-10.0.0.1:59006.service: Deactivated successfully.
Aug 19 08:17:47.159802 systemd[1]: session-28.scope: Deactivated successfully.
Aug 19 08:17:47.162337 systemd-logind[1582]: Session 28 logged out. Waiting for processes to exit.
Aug 19 08:17:47.163321 systemd-logind[1582]: Removed session 28.