Jul 7 00:21:27.821446 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025 Jul 7 00:21:27.821493 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:21:27.821505 kernel: BIOS-provided physical RAM map: Jul 7 00:21:27.821512 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 7 00:21:27.821518 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 7 00:21:27.821524 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Jul 7 00:21:27.821532 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 7 00:21:27.821539 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Jul 7 00:21:27.821545 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 7 00:21:27.821551 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 7 00:21:27.821558 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 7 00:21:27.821566 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 7 00:21:27.821573 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 7 00:21:27.821579 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 7 00:21:27.821587 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 7 00:21:27.821594 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 7 00:21:27.821603 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 00:21:27.821610 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 00:21:27.821617 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 00:21:27.821624 kernel: NX (Execute Disable) protection: active Jul 7 00:21:27.821631 kernel: APIC: Static calls initialized Jul 7 00:21:27.821638 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Jul 7 00:21:27.821645 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Jul 7 00:21:27.821652 kernel: extended physical RAM map: Jul 7 00:21:27.821659 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 7 00:21:27.821666 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 7 00:21:27.821673 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Jul 7 00:21:27.821682 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 7 00:21:27.821689 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Jul 7 00:21:27.821696 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Jul 7 00:21:27.821703 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Jul 7 00:21:27.821710 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Jul 7 00:21:27.821716 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Jul 7 00:21:27.821723 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 7 
00:21:27.821730 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 7 00:21:27.821737 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 7 00:21:27.821744 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 7 00:21:27.821751 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 7 00:21:27.821760 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 7 00:21:27.821767 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 7 00:21:27.821777 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 7 00:21:27.821784 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 00:21:27.821792 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 00:21:27.821799 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 00:21:27.821808 kernel: efi: EFI v2.7 by EDK II Jul 7 00:21:27.821815 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Jul 7 00:21:27.821823 kernel: random: crng init done Jul 7 00:21:27.821830 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 7 00:21:27.821837 kernel: secureboot: Secure boot enabled Jul 7 00:21:27.821844 kernel: SMBIOS 2.8 present. Jul 7 00:21:27.821851 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 7 00:21:27.821858 kernel: DMI: Memory slots populated: 1/1 Jul 7 00:21:27.821866 kernel: Hypervisor detected: KVM Jul 7 00:21:27.821873 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 00:21:27.821880 kernel: kvm-clock: using sched offset of 5115290052 cycles Jul 7 00:21:27.821890 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 00:21:27.821897 kernel: tsc: Detected 2794.750 MHz processor Jul 7 00:21:27.821905 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 00:21:27.821912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 00:21:27.821920 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Jul 7 00:21:27.821927 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 00:21:27.821934 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 00:21:27.821942 kernel: Using GB pages for direct mapping Jul 7 00:21:27.821949 kernel: ACPI: Early table checksum verification disabled Jul 7 00:21:27.821958 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Jul 7 00:21:27.821966 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 7 00:21:27.821973 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.821981 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.821988 kernel: ACPI: FACS 0x000000009BBDD000 000040 Jul 7 00:21:27.821995 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.822003 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.822010 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.822018 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 00:21:27.822027 kernel: ACPI: BGRT 
0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 7 00:21:27.822034 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Jul 7 00:21:27.822042 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Jul 7 00:21:27.822049 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Jul 7 00:21:27.822056 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Jul 7 00:21:27.822063 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Jul 7 00:21:27.822078 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Jul 7 00:21:27.822086 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Jul 7 00:21:27.822093 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Jul 7 00:21:27.822103 kernel: No NUMA configuration found Jul 7 00:21:27.822110 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Jul 7 00:21:27.822118 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Jul 7 00:21:27.822125 kernel: Zone ranges: Jul 7 00:21:27.822133 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 00:21:27.822140 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Jul 7 00:21:27.822147 kernel: Normal empty Jul 7 00:21:27.822155 kernel: Device empty Jul 7 00:21:27.822162 kernel: Movable zone start for each node Jul 7 00:21:27.822171 kernel: Early memory node ranges Jul 7 00:21:27.822178 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Jul 7 00:21:27.822186 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Jul 7 00:21:27.822193 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Jul 7 00:21:27.822200 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Jul 7 00:21:27.822208 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Jul 7 00:21:27.822215 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Jul 7 00:21:27.822222 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 00:21:27.822230 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Jul 7 00:21:27.822237 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 00:21:27.822246 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 7 00:21:27.822254 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 7 00:21:27.822261 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Jul 7 00:21:27.822268 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 00:21:27.822276 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 00:21:27.822283 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 00:21:27.822290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 00:21:27.822298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 00:21:27.822305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 00:21:27.822314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 00:21:27.822322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 00:21:27.822329 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 00:21:27.822336 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 00:21:27.822344 kernel: TSC deadline timer available Jul 7 00:21:27.822351 kernel: CPU topo: Max. logical packages: 1 Jul 7 00:21:27.822358 kernel: CPU topo: Max. logical dies: 1 Jul 7 00:21:27.822367 kernel: CPU topo: Max. 
dies per package: 1 Jul 7 00:21:27.822382 kernel: CPU topo: Max. threads per core: 1 Jul 7 00:21:27.822390 kernel: CPU topo: Num. cores per package: 4 Jul 7 00:21:27.822397 kernel: CPU topo: Num. threads per package: 4 Jul 7 00:21:27.822405 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 7 00:21:27.822414 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 00:21:27.822422 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 7 00:21:27.822429 kernel: kvm-guest: setup PV sched yield Jul 7 00:21:27.822437 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 7 00:21:27.822445 kernel: Booting paravirtualized kernel on KVM Jul 7 00:21:27.822455 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 00:21:27.822530 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 7 00:21:27.822540 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 7 00:21:27.822548 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 7 00:21:27.822557 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 7 00:21:27.822566 kernel: kvm-guest: PV spinlocks enabled Jul 7 00:21:27.822573 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 00:21:27.822589 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:21:27.822603 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 00:21:27.822618 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 00:21:27.822633 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 00:21:27.822641 kernel: Fallback order for Node 0: 0 Jul 7 00:21:27.822649 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Jul 7 00:21:27.822657 kernel: Policy zone: DMA32 Jul 7 00:21:27.822665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 00:21:27.822672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 7 00:21:27.822680 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 00:21:27.822690 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 00:21:27.822702 kernel: Dynamic Preempt: voluntary Jul 7 00:21:27.822710 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 00:21:27.822718 kernel: rcu: RCU event tracing is enabled. Jul 7 00:21:27.822726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 7 00:21:27.822734 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 00:21:27.822742 kernel: Rude variant of Tasks RCU enabled. Jul 7 00:21:27.822749 kernel: Tracing variant of Tasks RCU enabled. Jul 7 00:21:27.822757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 00:21:27.822767 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 7 00:21:27.822775 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 00:21:27.822783 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 7 00:21:27.822791 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 00:21:27.822799 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 7 00:21:27.822807 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 00:21:27.822814 kernel: Console: colour dummy device 80x25 Jul 7 00:21:27.822822 kernel: printk: legacy console [ttyS0] enabled Jul 7 00:21:27.822830 kernel: ACPI: Core revision 20240827 Jul 7 00:21:27.822839 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 7 00:21:27.822847 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 00:21:27.822855 kernel: x2apic enabled Jul 7 00:21:27.822863 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 00:21:27.822870 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 7 00:21:27.822878 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 7 00:21:27.822886 kernel: kvm-guest: setup PV IPIs Jul 7 00:21:27.822893 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 7 00:21:27.822901 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 7 00:21:27.822911 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 7 00:21:27.822919 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 00:21:27.822927 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 7 00:21:27.822935 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 7 00:21:27.822942 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 00:21:27.822950 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 00:21:27.822958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 00:21:27.822965 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 7 00:21:27.822973 kernel: RETBleed: Mitigation: untrained return thunk Jul 7 00:21:27.822983 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 00:21:27.822991 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 00:21:27.822999 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 7 00:21:27.823007 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 7 00:21:27.823015 kernel: x86/bugs: return thunk changed Jul 7 00:21:27.823022 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 7 00:21:27.823030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 00:21:27.823038 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 00:21:27.823048 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 00:21:27.823055 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 00:21:27.823063 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 7 00:21:27.823079 kernel: Freeing SMP alternatives memory: 32K Jul 7 00:21:27.823086 kernel: pid_max: default: 32768 minimum: 301 Jul 7 00:21:27.823094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 00:21:27.823101 kernel: landlock: Up and running. 
Jul 7 00:21:27.823109 kernel: SELinux: Initializing. Jul 7 00:21:27.823117 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 00:21:27.823132 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 00:21:27.823140 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 7 00:21:27.823147 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 7 00:21:27.823155 kernel: ... version: 0 Jul 7 00:21:27.823162 kernel: ... bit width: 48 Jul 7 00:21:27.823170 kernel: ... generic registers: 6 Jul 7 00:21:27.823182 kernel: ... value mask: 0000ffffffffffff Jul 7 00:21:27.823190 kernel: ... max period: 00007fffffffffff Jul 7 00:21:27.823198 kernel: ... fixed-purpose events: 0 Jul 7 00:21:27.823207 kernel: ... event mask: 000000000000003f Jul 7 00:21:27.823215 kernel: signal: max sigframe size: 1776 Jul 7 00:21:27.823222 kernel: rcu: Hierarchical SRCU implementation. Jul 7 00:21:27.823230 kernel: rcu: Max phase no-delay instances is 400. Jul 7 00:21:27.823238 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 00:21:27.823246 kernel: smp: Bringing up secondary CPUs ... Jul 7 00:21:27.823253 kernel: smpboot: x86: Booting SMP configuration: Jul 7 00:21:27.823261 kernel: .... node #0, CPUs: #1 #2 #3 Jul 7 00:21:27.823268 kernel: smp: Brought up 1 node, 4 CPUs Jul 7 00:21:27.823276 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 7 00:21:27.823286 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137064K reserved, 0K cma-reserved) Jul 7 00:21:27.823294 kernel: devtmpfs: initialized Jul 7 00:21:27.823301 kernel: x86/mm: Memory block size: 128MB Jul 7 00:21:27.823309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Jul 7 00:21:27.823317 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Jul 7 00:21:27.823325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 00:21:27.823333 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 7 00:21:27.823340 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 00:21:27.823350 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 00:21:27.823358 kernel: audit: initializing netlink subsys (disabled) Jul 7 00:21:27.823365 kernel: audit: type=2000 audit(1751847686.011:1): state=initialized audit_enabled=0 res=1 Jul 7 00:21:27.823373 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 00:21:27.823381 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 00:21:27.823388 kernel: cpuidle: using governor menu Jul 7 00:21:27.823396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 00:21:27.823404 kernel: dca service started, version 1.12.1 Jul 7 00:21:27.823412 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 7 00:21:27.823421 kernel: PCI: Using configuration type 1 for base access Jul 7 00:21:27.823429 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 00:21:27.823437 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 00:21:27.823444 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 00:21:27.823452 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 00:21:27.823483 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 00:21:27.823499 kernel: ACPI: Added _OSI(Module Device) Jul 7 00:21:27.823508 kernel: ACPI: Added _OSI(Processor Device) Jul 7 00:21:27.823516 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 00:21:27.823526 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 00:21:27.823534 kernel: ACPI: Interpreter enabled Jul 7 00:21:27.823541 kernel: ACPI: PM: (supports S0 S5) Jul 7 00:21:27.823552 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 00:21:27.823560 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 00:21:27.823568 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 00:21:27.823576 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 00:21:27.823583 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 00:21:27.823764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 00:21:27.823890 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 7 00:21:27.824005 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 7 00:21:27.824015 kernel: PCI host bridge to bus 0000:00 Jul 7 00:21:27.824141 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 00:21:27.824247 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 00:21:27.824359 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 00:21:27.824488 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 7 00:21:27.824606 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 7 00:21:27.824711 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 7 00:21:27.824816 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 00:21:27.824945 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 7 00:21:27.825086 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 7 00:21:27.825208 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 7 00:21:27.825323 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 7 00:21:27.825437 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 7 00:21:27.825570 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 00:21:27.825701 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 7 00:21:27.825818 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 7 00:21:27.825934 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 7 00:21:27.826054 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 7 00:21:27.826190 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 7 00:21:27.826309 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 7 00:21:27.826424 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 7 00:21:27.826566 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 7 
00:21:27.826699 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 7 00:21:27.826818 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 7 00:21:27.826939 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 7 00:21:27.827055 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 7 00:21:27.827183 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 7 00:21:27.827307 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 7 00:21:27.827424 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 00:21:27.827565 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 7 00:21:27.827681 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 7 00:21:27.827800 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 7 00:21:27.827923 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 7 00:21:27.828039 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 7 00:21:27.828050 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 00:21:27.828058 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 00:21:27.828066 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 00:21:27.828083 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 00:21:27.828094 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 00:21:27.828102 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 00:21:27.828110 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 00:21:27.828117 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 00:21:27.828125 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 00:21:27.828133 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 00:21:27.828141 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 00:21:27.828148 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 00:21:27.828156 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 00:21:27.828166 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 00:21:27.828174 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 00:21:27.828181 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 00:21:27.828189 kernel: iommu: Default domain type: Translated Jul 7 00:21:27.828197 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 00:21:27.828205 kernel: efivars: Registered efivars operations Jul 7 00:21:27.828212 kernel: PCI: Using ACPI for IRQ routing Jul 7 00:21:27.828220 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 00:21:27.828228 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Jul 7 00:21:27.828235 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Jul 7 00:21:27.828245 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Jul 7 00:21:27.828253 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Jul 7 00:21:27.828260 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Jul 7 00:21:27.828378 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 00:21:27.828527 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 00:21:27.828645 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 00:21:27.828655 kernel: vgaarb: loaded Jul 7 00:21:27.828663 kernel: 
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 7 00:21:27.828675 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 7 00:21:27.828683 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 00:21:27.828691 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 00:21:27.828699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 00:21:27.828707 kernel: pnp: PnP ACPI init Jul 7 00:21:27.828849 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 7 00:21:27.828860 kernel: pnp: PnP ACPI: found 6 devices Jul 7 00:21:27.828868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 00:21:27.828879 kernel: NET: Registered PF_INET protocol family Jul 7 00:21:27.828887 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 00:21:27.828895 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 00:21:27.828902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 00:21:27.828910 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 00:21:27.828918 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 00:21:27.828926 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 00:21:27.828933 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 00:21:27.828941 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 00:21:27.828952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 00:21:27.828959 kernel: NET: Registered PF_XDP protocol family Jul 7 00:21:27.829087 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 7 00:21:27.829206 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 7 00:21:27.829313 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 00:21:27.829436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 00:21:27.829566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 00:21:27.829673 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 7 00:21:27.829782 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 7 00:21:27.829887 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 7 00:21:27.829897 kernel: PCI: CLS 0 bytes, default 64 Jul 7 00:21:27.829905 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 7 00:21:27.829913 kernel: Initialise system trusted keyrings Jul 7 00:21:27.829921 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 00:21:27.829929 kernel: Key type asymmetric registered Jul 7 00:21:27.829937 kernel: Asymmetric key parser 'x509' registered Jul 7 00:21:27.829945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 00:21:27.829966 kernel: io scheduler mq-deadline registered Jul 7 00:21:27.829975 kernel: io scheduler kyber registered Jul 7 00:21:27.829983 kernel: io scheduler bfq registered Jul 7 00:21:27.829993 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 00:21:27.830003 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 00:21:27.830011 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 00:21:27.830019 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 00:21:27.830027 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Jul 7 00:21:27.830035 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 00:21:27.830045 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 00:21:27.830053 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 00:21:27.830061 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 00:21:27.830189 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 7 00:21:27.830201 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 00:21:27.830316 kernel: rtc_cmos 00:04: registered as rtc0 Jul 7 00:21:27.830425 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:21:27 UTC (1751847687) Jul 7 00:21:27.830553 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 7 00:21:27.830581 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 7 00:21:27.830590 kernel: efifb: probing for efifb Jul 7 00:21:27.830598 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 7 00:21:27.830606 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 7 00:21:27.830614 kernel: efifb: scrolling: redraw Jul 7 00:21:27.830623 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 00:21:27.830631 kernel: Console: switching to colour frame buffer device 160x50 Jul 7 00:21:27.830639 kernel: fb0: EFI VGA frame buffer device Jul 7 00:21:27.830647 kernel: pstore: Using crash dump compression: deflate Jul 7 00:21:27.830658 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 00:21:27.830668 kernel: NET: Registered PF_INET6 protocol family Jul 7 00:21:27.830676 kernel: Segment Routing with IPv6 Jul 7 00:21:27.830684 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 00:21:27.830692 kernel: NET: Registered PF_PACKET protocol family Jul 7 00:21:27.830702 kernel: Key type dns_resolver registered Jul 7 00:21:27.830710 kernel: IPI shorthand broadcast: enabled Jul 7 00:21:27.830719 kernel: sched_clock: Marking stable (2974002465, 134400350)->(3125147421, -16744606) Jul 7 00:21:27.830727 kernel: registered taskstats version 1 Jul 7 00:21:27.830735 kernel: Loading compiled-in X.509 certificates Jul 7 00:21:27.830743 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06' Jul 7 00:21:27.830751 kernel: Demotion targets for Node 0: null Jul 7 00:21:27.830759 kernel: Key type .fscrypt registered Jul 7 00:21:27.830767 kernel: Key type fscrypt-provisioning registered Jul 7 00:21:27.830778 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 00:21:27.830786 kernel: ima: Allocated hash algorithm: sha1 Jul 7 00:21:27.830794 kernel: ima: No architecture policies found Jul 7 00:21:27.830801 kernel: clk: Disabling unused clocks Jul 7 00:21:27.830809 kernel: Warning: unable to open an initial console. Jul 7 00:21:27.830818 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 00:21:27.830826 kernel: Write protecting the kernel read-only data: 24576k Jul 7 00:21:27.830834 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 00:21:27.830842 kernel: Run /init as init process Jul 7 00:21:27.830852 kernel: with arguments: Jul 7 00:21:27.830860 kernel: /init Jul 7 00:21:27.830868 kernel: with environment: Jul 7 00:21:27.830876 kernel: HOME=/ Jul 7 00:21:27.830884 kernel: TERM=linux Jul 7 00:21:27.830892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 00:21:27.830901 systemd[1]: Successfully made /usr/ read-only. 
Jul 7 00:21:27.830912 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:21:27.830923 systemd[1]: Detected virtualization kvm. Jul 7 00:21:27.830931 systemd[1]: Detected architecture x86-64. Jul 7 00:21:27.830940 systemd[1]: Running in initrd. Jul 7 00:21:27.830948 systemd[1]: No hostname configured, using default hostname. Jul 7 00:21:27.830957 systemd[1]: Hostname set to <localhost>. Jul 7 00:21:27.830965 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:21:27.830973 systemd[1]: Queued start job for default target initrd.target. Jul 7 00:21:27.830984 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:21:27.830993 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:21:27.831002 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 00:21:27.831010 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:21:27.831019 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 00:21:27.831028 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 00:21:27.831038 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 00:21:27.831049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 00:21:27.831058 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:21:27.831075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:21:27.831084 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:21:27.831093 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:21:27.831101 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:21:27.831111 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:21:27.831119 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:21:27.831128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:21:27.831138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 00:21:27.831147 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 00:21:27.831156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:21:27.831164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:21:27.831173 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:21:27.831181 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:21:27.831190 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 00:21:27.831199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:21:27.831209 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:21:27.831218 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 00:21:27.831227 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 00:21:27.831236 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:21:27.831244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:21:27.831253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:27.831262 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 00:21:27.831272 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:21:27.831281 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 00:21:27.831290 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 00:21:27.831322 systemd-journald[220]: Collecting audit messages is disabled. Jul 7 00:21:27.831345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 00:21:27.831354 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:21:27.831363 systemd-journald[220]: Journal started Jul 7 00:21:27.831382 systemd-journald[220]: Runtime Journal (/run/log/journal/04fef42c11674de3a2c2803fb4eb1eef) is 6M, max 48.2M, 42.2M free. Jul 7 00:21:27.821504 systemd-modules-load[222]: Inserted module 'overlay' Jul 7 00:21:27.835495 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:21:27.841677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:21:27.844835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:27.847164 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:21:27.851509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 00:21:27.855404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 00:21:27.854778 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 00:21:27.857705 kernel: Bridge firewalling registered Jul 7 00:21:27.856013 systemd-modules-load[222]: Inserted module 'br_netfilter' Jul 7 00:21:27.857007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:21:27.859483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:21:27.860502 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:21:27.871709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:21:27.872082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:21:27.875288 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 00:21:27.878659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 7 00:21:27.910056 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8 Jul 7 00:21:27.928727 systemd-resolved[260]: Positive Trust Anchors: Jul 7 00:21:27.928738 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:21:27.928770 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:21:27.931163 systemd-resolved[260]: Defaulting to hostname 'linux'. Jul 7 00:21:27.932166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:21:27.938256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:21:28.014502 kernel: SCSI subsystem initialized Jul 7 00:21:28.023492 kernel: Loading iSCSI transport class v2.0-870. Jul 7 00:21:28.033493 kernel: iscsi: registered transport (tcp) Jul 7 00:21:28.054502 kernel: iscsi: registered transport (qla4xxx) Jul 7 00:21:28.054525 kernel: QLogic iSCSI HBA Driver Jul 7 00:21:28.073823 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:21:28.090625 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:21:28.092861 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:21:28.142938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 00:21:28.145671 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 00:21:28.201513 kernel: raid6: avx2x4 gen() 28693 MB/s Jul 7 00:21:28.218494 kernel: raid6: avx2x2 gen() 27045 MB/s Jul 7 00:21:28.235518 kernel: raid6: avx2x1 gen() 24673 MB/s Jul 7 00:21:28.235543 kernel: raid6: using algorithm avx2x4 gen() 28693 MB/s Jul 7 00:21:28.253525 kernel: raid6: .... xor() 8903 MB/s, rmw enabled Jul 7 00:21:28.253572 kernel: raid6: using avx2x2 recovery algorithm Jul 7 00:21:28.273497 kernel: xor: automatically using best checksumming function avx Jul 7 00:21:28.435501 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 00:21:28.443070 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:21:28.445900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:21:28.477012 systemd-udevd[470]: Using default interface naming scheme 'v255'. Jul 7 00:21:28.482203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:21:28.484148 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 7 00:21:28.505612 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Jul 7 00:21:28.529538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:21:28.532814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:21:28.600626 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:21:28.601588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 00:21:28.656498 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 7 00:21:28.660512 kernel: libata version 3.00 loaded. Jul 7 00:21:28.660529 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 00:21:28.660674 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 00:21:28.665694 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 00:21:28.665914 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 00:21:28.674965 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 00:21:28.674990 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 00:21:28.675002 kernel: GPT:9289727 != 19775487 Jul 7 00:21:28.675012 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 00:21:28.675023 kernel: GPT:9289727 != 19775487 Jul 7 00:21:28.675033 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 00:21:28.675056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:21:28.668309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:21:28.668498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:28.670030 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:28.678680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:28.680287 kernel: AES CTR mode by8 optimization enabled Jul 7 00:21:28.684717 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 00:21:28.684889 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 00:21:28.685028 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 00:21:28.691365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:21:28.692574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:28.696448 kernel: scsi host0: ahci Jul 7 00:21:28.696830 kernel: scsi host1: ahci Jul 7 00:21:28.704643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:28.707482 kernel: scsi host2: ahci Jul 7 00:21:28.711534 kernel: scsi host3: ahci Jul 7 00:21:28.713529 kernel: scsi host4: ahci Jul 7 00:21:28.713751 kernel: scsi host5: ahci Jul 7 00:21:28.713905 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 7 00:21:28.715553 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 7 00:21:28.715571 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 7 00:21:28.717282 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 7 00:21:28.717303 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 7 00:21:28.719015 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 7 00:21:28.724962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jul 7 00:21:28.733699 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 00:21:28.736242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:28.757729 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:21:28.764405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 00:21:28.764489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 00:21:28.767560 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 00:21:28.803350 disk-uuid[633]: Primary Header is updated. Jul 7 00:21:28.803350 disk-uuid[633]: Secondary Entries is updated. Jul 7 00:21:28.803350 disk-uuid[633]: Secondary Header is updated. Jul 7 00:21:28.807487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:21:28.811483 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:21:29.031195 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 00:21:29.031249 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 00:21:29.031265 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 00:21:29.031276 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 00:21:29.032488 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 00:21:29.033493 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 7 00:21:29.033509 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 7 00:21:29.033984 kernel: ata3.00: applying bridge limits Jul 7 00:21:29.035482 kernel: ata3.00: configured for UDMA/100 Jul 7 00:21:29.035499 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 7 00:21:29.079938 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 7 00:21:29.080153 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 00:21:29.094518 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 7 00:21:29.481515 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 00:21:29.483167 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:21:29.484892 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:21:29.486035 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:21:29.489005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 00:21:29.515859 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:21:29.813022 disk-uuid[634]: The operation has completed successfully. Jul 7 00:21:29.814250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 00:21:29.837335 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 00:21:29.837448 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 00:21:29.874327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 00:21:29.890367 sh[665]: Success Jul 7 00:21:29.907492 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 7 00:21:29.907518 kernel: device-mapper: uevent: version 1.0.3 Jul 7 00:21:29.908525 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 00:21:29.917512 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 00:21:29.947449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 00:21:29.950607 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 00:21:29.964359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 00:21:29.970888 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 00:21:29.970914 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (677) Jul 7 00:21:29.972174 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239 Jul 7 00:21:29.972200 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:21:29.973014 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 00:21:29.977689 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 00:21:29.978124 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:21:29.981522 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 00:21:29.984025 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 00:21:29.986152 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 00:21:30.016492 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Jul 7 00:21:30.019048 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:21:30.019074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:21:30.019087 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:21:30.025496 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:21:30.026200 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 00:21:30.029764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 00:21:30.113551 ignition[755]: Ignition 2.21.0 Jul 7 00:21:30.113564 ignition[755]: Stage: fetch-offline Jul 7 00:21:30.113596 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:30.113605 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:30.113692 ignition[755]: parsed url from cmdline: "" Jul 7 00:21:30.117947 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:21:30.113697 ignition[755]: no config URL provided Jul 7 00:21:30.113702 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 00:21:30.113710 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jul 7 00:21:30.113733 ignition[755]: op(1): [started] loading QEMU firmware config module Jul 7 00:21:30.113738 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 00:21:30.124140 ignition[755]: op(1): [finished] loading QEMU firmware config module Jul 7 00:21:30.128677 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 7 00:21:30.164870 ignition[755]: parsing config with SHA512: b73be0739bb3ee29cad6cae997fb34993b0149ef54cdf4868e240bbe71cafd340d677983515927637997ca99dfd36b48767a2eabab7332e5ec2cc4c0d9d91831 Jul 7 00:21:30.169929 unknown[755]: fetched base config from "system" Jul 7 00:21:30.169941 unknown[755]: fetched user config from "qemu" Jul 7 00:21:30.170272 ignition[755]: fetch-offline: fetch-offline passed Jul 7 00:21:30.170325 ignition[755]: Ignition finished successfully Jul 7 00:21:30.173042 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:21:30.176981 systemd-networkd[854]: lo: Link UP Jul 7 00:21:30.176992 systemd-networkd[854]: lo: Gained carrier Jul 7 00:21:30.178514 systemd-networkd[854]: Enumeration completed Jul 7 00:21:30.178618 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:21:30.178865 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:21:30.178869 systemd-networkd[854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:21:30.179797 systemd-networkd[854]: eth0: Link UP Jul 7 00:21:30.179801 systemd-networkd[854]: eth0: Gained carrier Jul 7 00:21:30.179808 systemd-networkd[854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:21:30.180573 systemd[1]: Reached target network.target - Network. Jul 7 00:21:30.182435 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 00:21:30.188508 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 00:21:30.197522 systemd-networkd[854]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:21:30.220769 ignition[858]: Ignition 2.21.0 Jul 7 00:21:30.220781 ignition[858]: Stage: kargs Jul 7 00:21:30.220899 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:30.220908 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:30.223369 ignition[858]: kargs: kargs passed Jul 7 00:21:30.223413 ignition[858]: Ignition finished successfully Jul 7 00:21:30.228030 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 00:21:30.229320 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 00:21:30.258049 ignition[867]: Ignition 2.21.0 Jul 7 00:21:30.258061 ignition[867]: Stage: disks Jul 7 00:21:30.258190 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:30.258201 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:30.259351 ignition[867]: disks: disks passed Jul 7 00:21:30.259422 ignition[867]: Ignition finished successfully Jul 7 00:21:30.265349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 00:21:30.266729 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 00:21:30.268580 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 00:21:30.269634 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:21:30.271845 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:21:30.273622 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:21:30.277297 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 7 00:21:30.312750 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 00:21:30.320283 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 00:21:30.321451 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 00:21:30.446493 kernel: EXT4-fs (vda9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none. Jul 7 00:21:30.447713 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 00:21:30.449851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 00:21:30.453078 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:21:30.454813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 00:21:30.455935 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 00:21:30.455982 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 00:21:30.456020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:21:30.469233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 00:21:30.471707 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 00:21:30.474734 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Jul 7 00:21:30.476502 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:21:30.476518 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:21:30.477472 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:21:30.480630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:21:30.514156 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:21:30.518536 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:21:30.522003 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:21:30.525452 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:21:30.604738 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:21:30.606792 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:21:30.608413 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:21:30.624494 kernel: BTRFS info (device vda6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:21:30.636239 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:21:30.649166 ignition[1002]: INFO : Ignition 2.21.0 Jul 7 00:21:30.649166 ignition[1002]: INFO : Stage: mount Jul 7 00:21:30.649166 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:30.649166 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:30.652695 ignition[1002]: INFO : mount: mount passed Jul 7 00:21:30.653572 ignition[1002]: INFO : Ignition finished successfully Jul 7 00:21:30.656139 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:21:30.657153 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:21:30.970040 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:21:30.971513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
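The filesystem check and mounts in this stretch can be reproduced manually. The sketch below assumes the disk layout the log reports (an ext4 root labelled ROOT on /dev/vda9 and a BTRFS partition labelled OEM on /dev/vda6); it is a simplified stand-in for what systemd-fsck-root.service and the sysroot mount units do, not a transcript of them.

e2fsck -n /dev/disk/by-label/ROOT          # read-only check, comparable to the fsck result above
mount /dev/disk/by-label/ROOT /sysroot     # ext4 root filesystem
mount /dev/disk/by-label/OEM  /sysroot/oem # BTRFS OEM partition (/dev/vda6 in this log)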
Jul 7 00:21:30.999088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Jul 7 00:21:30.999130 kernel: BTRFS info (device vda6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:21:30.999143 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:21:30.999899 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 00:21:31.003954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:21:31.032270 ignition[1031]: INFO : Ignition 2.21.0 Jul 7 00:21:31.032270 ignition[1031]: INFO : Stage: files Jul 7 00:21:31.034097 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:31.035062 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:31.036944 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:21:31.038333 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:21:31.038333 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:21:31.042388 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:21:31.043716 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:21:31.045011 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:21:31.044085 unknown[1031]: wrote ssh authorized keys file for user: core Jul 7 00:21:31.047504 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:21:31.047504 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 00:21:31.100911 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:21:31.226337 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:21:31.226337 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:21:31.230115 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 00:21:31.519816 systemd-networkd[854]: eth0: Gained IPv6LL Jul 7 00:21:31.739302 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 00:21:31.922093 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:21:31.922093 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:21:31.925756 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:21:31.937533 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:21:31.937533 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:21:31.937533 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:21:31.937533 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:21:31.937533 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:21:31.947322 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 00:21:32.441408 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 00:21:32.724740 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:21:32.724740 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 00:21:32.728882 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:21:32.732780 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:21:32.732780 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 00:21:32.732780 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 7 00:21:32.732780 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:21:32.739331 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:21:32.739331 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 7 00:21:32.739331 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 00:21:32.757866 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:21:32.761674 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: op(10): [finished] setting preset to 
disabled for "coreos-metadata.service" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:21:32.763259 ignition[1031]: INFO : files: files passed Jul 7 00:21:32.763259 ignition[1031]: INFO : Ignition finished successfully Jul 7 00:21:32.768448 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:21:32.772811 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:21:32.775320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:21:32.789324 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:21:32.789450 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:21:32.793961 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 00:21:32.796703 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:21:32.798512 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:21:32.800040 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:21:32.802685 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:21:32.804075 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:21:32.807255 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:21:32.878024 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:21:32.878144 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:21:32.880377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:21:32.881413 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:21:32.884186 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:21:32.884964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:21:32.901694 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:21:32.903226 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:21:32.937680 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:21:32.937841 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:21:32.941248 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:21:32.942475 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:21:32.942589 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:21:32.947562 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:21:32.948671 systemd[1]: Stopped target basic.target - Basic System. 
Jul 7 00:21:32.950626 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:21:32.951613 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:21:32.954885 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:21:32.956132 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:21:32.956527 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:21:32.957037 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:21:32.957400 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:21:32.957934 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:21:32.958288 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:21:32.958790 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:21:32.958895 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:21:32.970600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:21:32.971760 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:21:32.972084 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:21:32.976025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:21:32.976299 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:21:32.976403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:21:32.981622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:21:32.981732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:21:32.983909 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:21:32.984963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:21:32.990529 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:21:32.990682 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:21:32.994291 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:21:32.995270 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:21:32.995362 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:21:32.998039 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:21:32.998121 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:21:32.999010 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:21:32.999123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:21:33.002096 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:21:33.002197 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:21:33.005071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:21:33.006167 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:21:33.006276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:21:33.007530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:21:33.010392 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:21:33.010521 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 7 00:21:33.011012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:21:33.011104 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:21:33.025385 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:21:33.026367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:21:33.037497 ignition[1086]: INFO : Ignition 2.21.0 Jul 7 00:21:33.037497 ignition[1086]: INFO : Stage: umount Jul 7 00:21:33.039284 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:21:33.039284 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:21:33.041546 ignition[1086]: INFO : umount: umount passed Jul 7 00:21:33.041546 ignition[1086]: INFO : Ignition finished successfully Jul 7 00:21:33.043236 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:21:33.043359 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:21:33.045433 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:21:33.045819 systemd[1]: Stopped target network.target - Network. Jul 7 00:21:33.046154 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:21:33.046200 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:21:33.046544 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:21:33.046584 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:21:33.047141 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:21:33.047187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:21:33.047448 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:21:33.047504 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:21:33.048026 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:21:33.055722 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:21:33.064708 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:21:33.064859 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:21:33.069136 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:21:33.069343 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:21:33.069555 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:21:33.073886 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:21:33.075159 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:21:33.075318 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:21:33.075366 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:21:33.076573 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:21:33.076831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:21:33.076878 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:21:33.077197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:21:33.077238 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:21:33.080031 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:21:33.080076 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jul 7 00:21:33.081084 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:21:33.081132 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:21:33.084890 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:21:33.086858 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:21:33.086928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:21:33.102632 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:21:33.102778 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:21:33.105738 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:21:33.106761 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:21:33.110154 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:21:33.110219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:21:33.112193 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:21:33.112229 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:21:33.113215 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:21:33.113263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:21:33.115423 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:21:33.115483 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:21:33.116207 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:21:33.116247 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:21:33.124217 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:21:33.126264 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:21:33.126318 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:21:33.129737 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:21:33.129790 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:21:33.133110 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:21:33.133163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:33.137317 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:21:33.137380 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:21:33.137425 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:21:33.152221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:21:33.152333 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:21:33.294412 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:21:33.294553 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:21:33.296649 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:21:33.297219 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 7 00:21:33.297274 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:21:33.301948 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:21:33.327543 systemd[1]: Switching root. Jul 7 00:21:33.363797 systemd-journald[220]: Journal stopped Jul 7 00:21:34.710764 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Jul 7 00:21:34.710838 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:21:34.710852 kernel: SELinux: policy capability open_perms=1 Jul 7 00:21:34.710869 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:21:34.710889 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:21:34.710903 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:21:34.710915 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:21:34.710926 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:21:34.710938 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:21:34.710949 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:21:34.710960 kernel: audit: type=1403 audit(1751847693.948:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:21:34.710978 systemd[1]: Successfully loaded SELinux policy in 50.590ms. Jul 7 00:21:34.711002 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.562ms. Jul 7 00:21:34.711015 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:21:34.711030 systemd[1]: Detected virtualization kvm. Jul 7 00:21:34.711042 systemd[1]: Detected architecture x86-64. Jul 7 00:21:34.711055 systemd[1]: Detected first boot. Jul 7 00:21:34.711067 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:21:34.711079 zram_generator::config[1132]: No configuration found. Jul 7 00:21:34.711092 kernel: Guest personality initialized and is inactive Jul 7 00:21:34.711108 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:21:34.711120 kernel: Initialized host personality Jul 7 00:21:34.711133 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:21:34.711145 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:21:34.711158 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:21:34.711175 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:21:34.711188 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:21:34.711200 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:21:34.711212 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:21:34.711224 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:21:34.711236 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:21:34.711250 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:21:34.711262 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:21:34.711274 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 7 00:21:34.711286 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:21:34.711298 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:21:34.711310 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:21:34.711327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:21:34.711339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:21:34.711351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:21:34.711365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:21:34.711378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:21:34.711390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:21:34.711402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:21:34.711414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:21:34.711426 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:21:34.711437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:21:34.711451 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:21:34.711492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:21:34.711505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:21:34.711517 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:21:34.711531 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:21:34.711545 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:21:34.711557 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:21:34.711568 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:21:34.711581 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:21:34.711596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:21:34.711608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:21:34.711621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:21:34.711632 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:21:34.711644 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:21:34.711656 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:21:34.711668 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:21:34.711680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:34.711692 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:21:34.711706 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:21:34.711718 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:21:34.711730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jul 7 00:21:34.711742 systemd[1]: Reached target machines.target - Containers. Jul 7 00:21:34.711754 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:21:34.711766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:21:34.711778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:21:34.711790 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:21:34.711802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:21:34.712005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:21:34.712022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:21:34.712039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:21:34.712058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:21:34.712070 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:21:34.712084 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:21:34.712096 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:21:34.712107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:21:34.712121 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:21:34.712134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:21:34.712147 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:21:34.712159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:21:34.712171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:21:34.712183 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:21:34.712207 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:21:34.712231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:21:34.712243 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:21:34.712255 systemd[1]: Stopped verity-setup.service. Jul 7 00:21:34.712266 kernel: loop: module loaded Jul 7 00:21:34.712278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:34.712290 kernel: fuse: init (API version 7.41) Jul 7 00:21:34.712304 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:21:34.712316 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:21:34.712328 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:21:34.712340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:21:34.712352 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:21:34.712365 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:21:34.712378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 7 00:21:34.712391 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:21:34.712403 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:21:34.712435 systemd-journald[1203]: Collecting audit messages is disabled. Jul 7 00:21:34.712743 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:21:34.712762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:21:34.712775 systemd-journald[1203]: Journal started Jul 7 00:21:34.712800 systemd-journald[1203]: Runtime Journal (/run/log/journal/04fef42c11674de3a2c2803fb4eb1eef) is 6M, max 48.2M, 42.2M free. Jul 7 00:21:34.466421 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:21:34.488298 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 00:21:34.488773 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:21:34.714956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:21:34.718496 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:21:34.718572 kernel: ACPI: bus type drm_connector registered Jul 7 00:21:34.720396 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:21:34.720683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:21:34.722220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:21:34.722432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:21:34.724175 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:21:34.724417 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:21:34.725986 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:21:34.726201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:21:34.727866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:21:34.729371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:21:34.731053 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:21:34.732664 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:21:34.748293 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:21:34.751185 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:21:34.753393 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:21:34.754577 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:21:34.754610 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:21:34.757689 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:21:34.762770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:21:34.765013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:21:34.767843 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:21:34.771324 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
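The runtime journal figures above (6M used out of a 48.2M cap under /run/log/journal) can be checked later with journalctl itself; the commands below are standard systemd tooling and simply show where those numbers come from.

journalctl --header | head -n 20   # per-file journal metadata and limits
journalctl --disk-usage            # total space used by active and archived journals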
Jul 7 00:21:34.772741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:21:34.774200 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:21:34.775571 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:21:34.776792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:21:34.779183 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:21:34.783126 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:21:34.795654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:21:34.797623 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:21:34.799171 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:21:34.811484 kernel: loop0: detected capacity change from 0 to 113872 Jul 7 00:21:34.811568 systemd-journald[1203]: Time spent on flushing to /var/log/journal/04fef42c11674de3a2c2803fb4eb1eef is 13.940ms for 1044 entries. Jul 7 00:21:34.811568 systemd-journald[1203]: System Journal (/var/log/journal/04fef42c11674de3a2c2803fb4eb1eef) is 8M, max 195.6M, 187.6M free. Jul 7 00:21:34.838652 systemd-journald[1203]: Received client request to flush runtime journal. Jul 7 00:21:34.838721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:21:34.813029 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:21:34.815125 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:21:34.822613 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:21:34.824347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:21:34.842270 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:21:34.851608 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:21:34.855717 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:21:34.860492 kernel: loop1: detected capacity change from 0 to 146240 Jul 7 00:21:34.865948 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:21:34.887615 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jul 7 00:21:34.887634 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jul 7 00:21:34.892584 kernel: loop2: detected capacity change from 0 to 229808 Jul 7 00:21:34.895765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:21:34.922524 kernel: loop3: detected capacity change from 0 to 113872 Jul 7 00:21:34.930497 kernel: loop4: detected capacity change from 0 to 146240 Jul 7 00:21:34.943504 kernel: loop5: detected capacity change from 0 to 229808 Jul 7 00:21:34.952013 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 00:21:34.953552 (sd-merge)[1274]: Merged extensions into '/usr'. Jul 7 00:21:34.959135 systemd[1]: Reload requested from client PID 1252 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:21:34.959157 systemd[1]: Reloading... Jul 7 00:21:35.025491 zram_generator::config[1297]: No configuration found. 
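The (sd-merge) lines above come from systemd-sysext overlaying the listed extension images onto /usr. The commands below are the standard way to inspect and refresh that state on a running system; output will naturally differ from this boot.

systemd-sysext status      # lists merged extensions such as 'kubernetes'
ls /etc/extensions         # where the kubernetes.raw symlink written in the files stage lives
systemd-sysext refresh     # re-merge after adding or removing an extension image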
Jul 7 00:21:35.106320 ldconfig[1247]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:21:35.127257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:21:35.208104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:21:35.208334 systemd[1]: Reloading finished in 248 ms. Jul 7 00:21:35.239952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:21:35.241661 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:21:35.258832 systemd[1]: Starting ensure-sysext.service... Jul 7 00:21:35.260624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:21:35.269761 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:21:35.269775 systemd[1]: Reloading... Jul 7 00:21:35.282520 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:21:35.282557 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:21:35.282843 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:21:35.283101 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:21:35.284096 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:21:35.284432 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jul 7 00:21:35.284591 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Jul 7 00:21:35.288545 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:21:35.288557 systemd-tmpfiles[1339]: Skipping /boot Jul 7 00:21:35.302963 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:21:35.303091 systemd-tmpfiles[1339]: Skipping /boot Jul 7 00:21:35.322495 zram_generator::config[1372]: No configuration found. Jul 7 00:21:35.409996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:21:35.492286 systemd[1]: Reloading finished in 222 ms. Jul 7 00:21:35.520280 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:21:35.544968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:21:35.554371 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:21:35.557348 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:21:35.572080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:21:35.577353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:21:35.580824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:21:35.583357 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 7 00:21:35.588290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.588483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:21:35.590781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:21:35.594390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:21:35.598647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:21:35.603146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:21:35.603267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:21:35.609720 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:21:35.610763 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.612915 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:21:35.615757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:21:35.616003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:21:35.617838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:21:35.618062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:21:35.619956 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:21:35.620167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:21:35.632036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.632249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:21:35.635234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:21:35.636024 systemd-udevd[1412]: Using default interface naming scheme 'v255'. Jul 7 00:21:35.637251 augenrules[1439]: No rules Jul 7 00:21:35.637641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:21:35.641845 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:21:35.643044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:21:35.643181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:21:35.645096 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:21:35.646321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.647641 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:21:35.654740 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 7 00:21:35.657127 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:21:35.658864 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:21:35.660883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:21:35.661101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:21:35.663261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:21:35.663613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:21:35.665479 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:21:35.665693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:21:35.668929 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:21:35.672361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:21:35.680035 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:21:35.685232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.690663 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:21:35.691754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:21:35.694878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:21:35.699625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:21:35.710784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:21:35.719558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:21:35.720777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:21:35.721015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:21:35.725656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:21:35.727064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:21:35.727206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:21:35.734076 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:21:35.735447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:21:35.737629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:21:35.744337 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:21:35.747728 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:21:35.748093 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:21:35.750199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:21:35.751519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:21:35.757304 systemd[1]: Finished ensure-sysext.service. 
Jul 7 00:21:35.762036 augenrules[1477]: /sbin/augenrules: No change Jul 7 00:21:35.803187 augenrules[1518]: No rules Jul 7 00:21:35.804518 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:21:35.804830 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:21:35.809287 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:21:35.809455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:21:35.809529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:21:35.811805 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:21:35.818852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:21:35.829355 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:21:35.833977 systemd-resolved[1408]: Positive Trust Anchors: Jul 7 00:21:35.833994 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:21:35.834027 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:21:35.840153 systemd-resolved[1408]: Defaulting to hostname 'linux'. Jul 7 00:21:35.841772 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:21:35.843090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:21:35.845597 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:21:35.853552 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:21:35.856480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 00:21:35.862482 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:21:35.875702 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 7 00:21:35.875953 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 00:21:35.876112 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:21:35.907205 systemd-networkd[1491]: lo: Link UP Jul 7 00:21:35.907502 systemd-networkd[1491]: lo: Gained carrier Jul 7 00:21:35.909091 systemd-networkd[1491]: Enumeration completed Jul 7 00:21:35.909216 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:21:35.909453 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:21:35.909471 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:21:35.910628 systemd[1]: Reached target network.target - Network. 
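As in the initrd, eth0 is matched by /usr/lib/systemd/network/zz-default.network. The here-document below sketches what such a catch-all DHCP unit looks like; it is not a copy of the file Flatcar ships, whose match rules and options may differ, and writing it to /etc would merely override the packaged default.

cat <<'EOF' > /etc/systemd/network/zz-default.network
[Match]
Name=*

[Network]
DHCP=yes
EOF
networkctl status eth0   # would show the DHCPv4 lease acquired just after this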
Jul 7 00:21:35.910633 systemd-networkd[1491]: eth0: Link UP Jul 7 00:21:35.910778 systemd-networkd[1491]: eth0: Gained carrier Jul 7 00:21:35.910792 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:21:35.913870 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:21:35.923118 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:21:35.927296 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:21:35.957702 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:21:36.002654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:36.014980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:21:36.015267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:36.018796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:21:36.024529 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:21:37.524577 systemd-resolved[1408]: Clock change detected. Flushing caches. Jul 7 00:21:37.524676 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 00:21:37.524715 systemd-timesyncd[1528]: Initial clock synchronization to Mon 2025-07-07 00:21:37.524528 UTC. Jul 7 00:21:37.525729 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:21:37.560847 kernel: kvm_amd: TSC scaling supported Jul 7 00:21:37.560911 kernel: kvm_amd: Nested Virtualization enabled Jul 7 00:21:37.560925 kernel: kvm_amd: Nested Paging enabled Jul 7 00:21:37.562009 kernel: kvm_amd: LBR virtualization supported Jul 7 00:21:37.564845 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 00:21:37.564872 kernel: kvm_amd: Virtual GIF supported Jul 7 00:21:37.605623 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:21:37.609444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:21:37.610894 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:21:37.612073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:21:37.613316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:21:37.614542 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:21:37.615839 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:21:37.617051 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:21:37.618281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:21:37.619521 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:21:37.619549 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:21:37.620472 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:21:37.622499 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:21:37.625450 systemd[1]: Starting docker.socket - Docker Socket for the API... 
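The clock-change and time-server messages above show systemd-timesyncd synchronizing against 10.0.0.1:123, presumably advertised alongside the DHCP lease. The state it reports can be confirmed with standard timedatectl verbs, as sketched below.

timedatectl timesync-status   # server address, stratum and poll interval
timedatectl show-timesync     # the same data in machine-readable form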
Jul 7 00:21:37.629025 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:21:37.630432 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:21:37.631756 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:21:37.641287 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:21:37.642730 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:21:37.644470 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:21:37.646284 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:21:37.647220 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:21:37.648166 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:21:37.648192 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:21:37.649184 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:21:37.651211 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:21:37.654753 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:21:37.659410 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:21:37.661724 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:21:37.662714 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:21:37.663831 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:21:37.667720 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:21:37.670079 jq[1567]: false Jul 7 00:21:37.669780 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:21:37.671682 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:21:37.675764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:21:37.676468 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jul 7 00:21:37.676683 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Jul 7 00:21:37.679037 extend-filesystems[1568]: Found /dev/vda6 Jul 7 00:21:37.681636 extend-filesystems[1568]: Found /dev/vda9 Jul 7 00:21:37.681650 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:21:37.684790 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:21:37.685517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:21:37.686136 extend-filesystems[1568]: Checking size of /dev/vda9 Jul 7 00:21:37.687787 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:21:37.690726 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 7 00:21:37.690881 oslogin_cache_refresh[1569]: Failure getting users, quitting Jul 7 00:21:37.691320 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Jul 7 00:21:37.691320 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:21:37.691320 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Jul 7 00:21:37.690899 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:21:37.690942 oslogin_cache_refresh[1569]: Refreshing group entry cache Jul 7 00:21:37.697555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:21:37.699320 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:21:37.701674 jq[1587]: true Jul 7 00:21:37.699883 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:21:37.700245 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:21:37.700521 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:21:37.703104 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Jul 7 00:21:37.703145 oslogin_cache_refresh[1569]: Failure getting groups, quitting Jul 7 00:21:37.703200 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:21:37.703282 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:21:37.706350 extend-filesystems[1568]: Resized partition /dev/vda9 Jul 7 00:21:37.708114 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:21:37.708364 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:21:37.709935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:21:37.710248 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:21:37.714438 extend-filesystems[1596]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:21:37.725788 update_engine[1585]: I20250707 00:21:37.725692 1585 main.cc:92] Flatcar Update Engine starting Jul 7 00:21:37.731636 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 00:21:37.733070 jq[1597]: true Jul 7 00:21:37.735558 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:21:37.744611 tar[1595]: linux-amd64/LICENSE Jul 7 00:21:37.744808 tar[1595]: linux-amd64/helm Jul 7 00:21:37.767370 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 00:21:37.786216 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 00:21:37.786216 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:21:37.786216 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 00:21:37.789900 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Jul 7 00:21:37.793481 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:21:37.794011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 7 00:21:37.795164 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:21:37.795192 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:21:37.797109 systemd-logind[1579]: New seat seat0. Jul 7 00:21:37.799695 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:21:37.805375 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:21:37.808648 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:21:37.810626 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:21:37.812209 dbus-daemon[1565]: [system] SELinux support is enabled Jul 7 00:21:37.812634 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:21:37.816516 update_engine[1585]: I20250707 00:21:37.816475 1585 update_check_scheduler.cc:74] Next update check in 2m21s Jul 7 00:21:37.817143 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:21:37.818522 dbus-daemon[1565]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:21:37.817175 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:21:37.818469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:21:37.818485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:21:37.819761 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:21:37.822881 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 7 00:21:37.870626 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:21:37.929188 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:21:37.934458 containerd[1598]: time="2025-07-07T00:21:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:21:37.935410 containerd[1598]: time="2025-07-07T00:21:37.935376492Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:21:37.943209 containerd[1598]: time="2025-07-07T00:21:37.943160668Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.456µs" Jul 7 00:21:37.943209 containerd[1598]: time="2025-07-07T00:21:37.943201644Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:21:37.943273 containerd[1598]: time="2025-07-07T00:21:37.943219869Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:21:37.943442 containerd[1598]: time="2025-07-07T00:21:37.943411388Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:21:37.943473 containerd[1598]: time="2025-07-07T00:21:37.943448167Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:21:37.943492 containerd[1598]: time="2025-07-07T00:21:37.943475959Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943565 containerd[1598]: time="2025-07-07T00:21:37.943539949Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943565 containerd[1598]: time="2025-07-07T00:21:37.943558594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943879 containerd[1598]: time="2025-07-07T00:21:37.943852625Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943879 containerd[1598]: time="2025-07-07T00:21:37.943872913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943932 containerd[1598]: time="2025-07-07T00:21:37.943883373Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:21:37.943932 containerd[1598]: time="2025-07-07T00:21:37.943892890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:21:37.944021 containerd[1598]: time="2025-07-07T00:21:37.943998398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:21:37.944248 containerd[1598]: time="2025-07-07T00:21:37.944224051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:21:37.944274 containerd[1598]: 
time="2025-07-07T00:21:37.944259398Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:21:37.944274 containerd[1598]: time="2025-07-07T00:21:37.944269787Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:21:37.944312 containerd[1598]: time="2025-07-07T00:21:37.944301577Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:21:37.944549 containerd[1598]: time="2025-07-07T00:21:37.944526448Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:21:37.944631 containerd[1598]: time="2025-07-07T00:21:37.944608803Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:21:37.950650 containerd[1598]: time="2025-07-07T00:21:37.950617309Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:21:37.950688 containerd[1598]: time="2025-07-07T00:21:37.950668785Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:21:37.950688 containerd[1598]: time="2025-07-07T00:21:37.950684294Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:21:37.950763 containerd[1598]: time="2025-07-07T00:21:37.950702729Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:21:37.950788 containerd[1598]: time="2025-07-07T00:21:37.950763833Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:21:37.950788 containerd[1598]: time="2025-07-07T00:21:37.950776297Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:21:37.950824 containerd[1598]: time="2025-07-07T00:21:37.950789341Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:21:37.950824 containerd[1598]: time="2025-07-07T00:21:37.950801865Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:21:37.950824 containerd[1598]: time="2025-07-07T00:21:37.950818766Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:21:37.950888 containerd[1598]: time="2025-07-07T00:21:37.950828825Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:21:37.950888 containerd[1598]: time="2025-07-07T00:21:37.950838103Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:21:37.950888 containerd[1598]: time="2025-07-07T00:21:37.950850656Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.950972805Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.950997882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951012179Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951029421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951041504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951051773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951062534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951072693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951083583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951093582Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951103861Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951164495Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951176668Z" level=info msg="Start snapshots syncer" Jul 7 00:21:37.951307 containerd[1598]: time="2025-07-07T00:21:37.951209830Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:21:37.952116 containerd[1598]: time="2025-07-07T00:21:37.951988750Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:21:37.952219 containerd[1598]: time="2025-07-07T00:21:37.952156304Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:21:37.952942 containerd[1598]: time="2025-07-07T00:21:37.952908916Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:21:37.953054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 7 00:21:37.953116 containerd[1598]: time="2025-07-07T00:21:37.953078884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:21:37.953116 containerd[1598]: time="2025-07-07T00:21:37.953107989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:21:37.953156 containerd[1598]: time="2025-07-07T00:21:37.953124820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:21:37.954334 containerd[1598]: time="2025-07-07T00:21:37.954231525Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:21:37.954334 containerd[1598]: time="2025-07-07T00:21:37.954272312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:21:37.954334 containerd[1598]: time="2025-07-07T00:21:37.954286438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:21:37.954517 containerd[1598]: time="2025-07-07T00:21:37.954385554Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:21:37.954517 containerd[1598]: time="2025-07-07T00:21:37.954418897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:21:37.954517 containerd[1598]: time="2025-07-07T00:21:37.954436520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:21:37.954517 containerd[1598]: time="2025-07-07T00:21:37.954453191Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:21:37.955312 containerd[1598]: time="2025-07-07T00:21:37.955277607Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:21:37.955369 containerd[1598]: time="2025-07-07T00:21:37.955330676Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:21:37.955369 containerd[1598]: time="2025-07-07T00:21:37.955343090Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:21:37.955369 containerd[1598]: time="2025-07-07T00:21:37.955357386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:21:37.955369 containerd[1598]: time="2025-07-07T00:21:37.955369349Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:21:37.955444 containerd[1598]: time="2025-07-07T00:21:37.955380830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:21:37.955444 containerd[1598]: time="2025-07-07T00:21:37.955402912Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:21:37.955444 containerd[1598]: time="2025-07-07T00:21:37.955434782Z" level=info msg="runtime interface created" Jul 7 00:21:37.955444 containerd[1598]: time="2025-07-07T00:21:37.955442847Z" level=info msg="created NRI interface" Jul 7 00:21:37.955520 containerd[1598]: time="2025-07-07T00:21:37.955457133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:21:37.955520 
containerd[1598]: time="2025-07-07T00:21:37.955476820Z" level=info msg="Connect containerd service" Jul 7 00:21:37.955520 containerd[1598]: time="2025-07-07T00:21:37.955514030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:21:37.956399 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:21:37.956935 containerd[1598]: time="2025-07-07T00:21:37.956516770Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:21:37.977392 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:21:37.977782 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:21:37.982001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:21:38.006221 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:21:38.009853 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:21:38.013052 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:21:38.014265 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:21:38.049257 containerd[1598]: time="2025-07-07T00:21:38.049207075Z" level=info msg="Start subscribing containerd event" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049281885Z" level=info msg="Start recovering state" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049354080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049391370Z" level=info msg="Start event monitor" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049408562Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049416137Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049417199Z" level=info msg="Start streaming server" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049447746Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049454969Z" level=info msg="runtime interface starting up..." Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049460620Z" level=info msg="starting plugins..." Jul 7 00:21:38.049488 containerd[1598]: time="2025-07-07T00:21:38.049474847Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:21:38.049783 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:21:38.051043 containerd[1598]: time="2025-07-07T00:21:38.049920492Z" level=info msg="containerd successfully booted in 0.116070s" Jul 7 00:21:38.200330 tar[1595]: linux-amd64/README.md Jul 7 00:21:38.223146 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:21:38.458784 systemd-networkd[1491]: eth0: Gained IPv6LL Jul 7 00:21:38.461912 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:21:38.463694 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:21:38.466240 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jul 7 00:21:38.468562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:21:38.470701 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:21:38.502831 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:21:38.505027 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 00:21:38.505360 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 00:21:38.508411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:21:39.246854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:21:39.248702 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:21:39.250136 systemd[1]: Startup finished in 3.030s (kernel) + 6.324s (initrd) + 3.851s (userspace) = 13.206s. Jul 7 00:21:39.258039 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:21:39.680181 kubelet[1701]: E0707 00:21:39.680061 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:21:39.684319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:21:39.684518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:21:39.684921 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 267.1M memory peak. Jul 7 00:21:42.051651 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:21:42.053197 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:35454.service - OpenSSH per-connection server daemon (10.0.0.1:35454). Jul 7 00:21:42.123452 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:42.125933 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:42.133393 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:21:42.134830 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:21:42.141884 systemd-logind[1579]: New session 1 of user core. Jul 7 00:21:42.159665 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:21:42.163765 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:21:42.187148 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:21:42.189987 systemd-logind[1579]: New session c1 of user core. Jul 7 00:21:42.347094 systemd[1718]: Queued start job for default target default.target. Jul 7 00:21:42.355795 systemd[1718]: Created slice app.slice - User Application Slice. Jul 7 00:21:42.355821 systemd[1718]: Reached target paths.target - Paths. Jul 7 00:21:42.355871 systemd[1718]: Reached target timers.target - Timers. Jul 7 00:21:42.357370 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:21:42.368351 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:21:42.368494 systemd[1718]: Reached target sockets.target - Sockets. 
Jul 7 00:21:42.368537 systemd[1718]: Reached target basic.target - Basic System. Jul 7 00:21:42.368601 systemd[1718]: Reached target default.target - Main User Target. Jul 7 00:21:42.368640 systemd[1718]: Startup finished in 171ms. Jul 7 00:21:42.369115 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:21:42.370750 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:21:42.438780 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:35458.service - OpenSSH per-connection server daemon (10.0.0.1:35458). Jul 7 00:21:42.485160 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 35458 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:42.486735 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:42.491461 systemd-logind[1579]: New session 2 of user core. Jul 7 00:21:42.501745 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:21:42.554934 sshd[1731]: Connection closed by 10.0.0.1 port 35458 Jul 7 00:21:42.555313 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:42.573423 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:35458.service: Deactivated successfully. Jul 7 00:21:42.575069 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:21:42.575833 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:21:42.578356 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:35466.service - OpenSSH per-connection server daemon (10.0.0.1:35466). Jul 7 00:21:42.578924 systemd-logind[1579]: Removed session 2. Jul 7 00:21:42.624106 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 35466 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:42.625536 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:42.629870 systemd-logind[1579]: New session 3 of user core. Jul 7 00:21:42.636720 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:21:42.684888 sshd[1739]: Connection closed by 10.0.0.1 port 35466 Jul 7 00:21:42.685232 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:42.696108 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:35466.service: Deactivated successfully. Jul 7 00:21:42.697725 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:21:42.698455 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:21:42.701236 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472). Jul 7 00:21:42.701771 systemd-logind[1579]: Removed session 3. Jul 7 00:21:42.754189 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:42.755473 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:42.759577 systemd-logind[1579]: New session 4 of user core. Jul 7 00:21:42.774697 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:21:42.826566 sshd[1747]: Connection closed by 10.0.0.1 port 35472 Jul 7 00:21:42.826898 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:42.837019 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:35472.service: Deactivated successfully. Jul 7 00:21:42.838654 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:21:42.839374 systemd-logind[1579]: Session 4 logged out. 
Waiting for processes to exit. Jul 7 00:21:42.842304 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:35486.service - OpenSSH per-connection server daemon (10.0.0.1:35486). Jul 7 00:21:42.842813 systemd-logind[1579]: Removed session 4. Jul 7 00:21:42.887927 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 35486 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:42.889227 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:42.893245 systemd-logind[1579]: New session 5 of user core. Jul 7 00:21:42.899722 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:21:42.955664 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:21:42.955977 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:21:42.976790 sudo[1756]: pam_unix(sudo:session): session closed for user root Jul 7 00:21:42.978291 sshd[1755]: Connection closed by 10.0.0.1 port 35486 Jul 7 00:21:42.978657 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:42.990204 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:35486.service: Deactivated successfully. Jul 7 00:21:42.991863 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:21:42.992611 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:21:42.995552 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:35496.service - OpenSSH per-connection server daemon (10.0.0.1:35496). Jul 7 00:21:42.996108 systemd-logind[1579]: Removed session 5. Jul 7 00:21:43.048292 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 35496 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:43.049662 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:43.053659 systemd-logind[1579]: New session 6 of user core. Jul 7 00:21:43.075724 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:21:43.128001 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:21:43.128317 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:21:43.193778 sudo[1766]: pam_unix(sudo:session): session closed for user root Jul 7 00:21:43.199355 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:21:43.199662 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:21:43.208823 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:21:43.256356 augenrules[1788]: No rules Jul 7 00:21:43.258044 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:21:43.258309 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:21:43.259378 sudo[1765]: pam_unix(sudo:session): session closed for user root Jul 7 00:21:43.260755 sshd[1764]: Connection closed by 10.0.0.1 port 35496 Jul 7 00:21:43.261047 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jul 7 00:21:43.277152 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:35496.service: Deactivated successfully. Jul 7 00:21:43.278861 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:21:43.279525 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. 
Jul 7 00:21:43.282055 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:35508.service - OpenSSH per-connection server daemon (10.0.0.1:35508). Jul 7 00:21:43.282606 systemd-logind[1579]: Removed session 6. Jul 7 00:21:43.329990 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 35508 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:21:43.331229 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:21:43.335085 systemd-logind[1579]: New session 7 of user core. Jul 7 00:21:43.344707 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:21:43.397624 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:21:43.397958 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:21:43.710162 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:21:43.728913 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:21:43.951433 dockerd[1821]: time="2025-07-07T00:21:43.951364438Z" level=info msg="Starting up" Jul 7 00:21:43.953251 dockerd[1821]: time="2025-07-07T00:21:43.953199970Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:21:44.444750 dockerd[1821]: time="2025-07-07T00:21:44.444701705Z" level=info msg="Loading containers: start." Jul 7 00:21:44.455606 kernel: Initializing XFRM netlink socket Jul 7 00:21:44.694162 systemd-networkd[1491]: docker0: Link UP Jul 7 00:21:44.700163 dockerd[1821]: time="2025-07-07T00:21:44.700068217Z" level=info msg="Loading containers: done." Jul 7 00:21:44.715151 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3604860927-merged.mount: Deactivated successfully. Jul 7 00:21:44.719030 dockerd[1821]: time="2025-07-07T00:21:44.718975739Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:21:44.719152 dockerd[1821]: time="2025-07-07T00:21:44.719053625Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:21:44.719202 dockerd[1821]: time="2025-07-07T00:21:44.719180874Z" level=info msg="Initializing buildkit" Jul 7 00:21:44.750214 dockerd[1821]: time="2025-07-07T00:21:44.750162314Z" level=info msg="Completed buildkit initialization" Jul 7 00:21:44.756310 dockerd[1821]: time="2025-07-07T00:21:44.756260117Z" level=info msg="Daemon has completed initialization" Jul 7 00:21:44.756409 dockerd[1821]: time="2025-07-07T00:21:44.756345618Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:21:44.756458 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:21:45.264353 containerd[1598]: time="2025-07-07T00:21:45.264315531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 00:21:45.872340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658233028.mount: Deactivated successfully. 
Jul 7 00:21:46.783406 containerd[1598]: time="2025-07-07T00:21:46.783335172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:46.784115 containerd[1598]: time="2025-07-07T00:21:46.784070912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 00:21:46.785255 containerd[1598]: time="2025-07-07T00:21:46.785227821Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:46.787797 containerd[1598]: time="2025-07-07T00:21:46.787743789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:46.788773 containerd[1598]: time="2025-07-07T00:21:46.788716763Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.524356899s" Jul 7 00:21:46.788773 containerd[1598]: time="2025-07-07T00:21:46.788767628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 00:21:46.789348 containerd[1598]: time="2025-07-07T00:21:46.789314323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 00:21:48.452152 containerd[1598]: time="2025-07-07T00:21:48.452089166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:48.453890 containerd[1598]: time="2025-07-07T00:21:48.453844958Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 00:21:48.455311 containerd[1598]: time="2025-07-07T00:21:48.455276292Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:48.458328 containerd[1598]: time="2025-07-07T00:21:48.458263092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:48.459105 containerd[1598]: time="2025-07-07T00:21:48.459061238Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.669703734s" Jul 7 00:21:48.459105 containerd[1598]: time="2025-07-07T00:21:48.459101985Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 00:21:48.459570 containerd[1598]: 
time="2025-07-07T00:21:48.459534365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 00:21:49.934965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:21:49.936727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:21:50.133512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:21:50.138734 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:21:50.247460 kubelet[2102]: E0707 00:21:50.247290 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:21:50.254199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:21:50.254399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:21:50.254787 systemd[1]: kubelet.service: Consumed 217ms CPU time, 109.6M memory peak. Jul 7 00:21:50.324890 containerd[1598]: time="2025-07-07T00:21:50.324816259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:50.325561 containerd[1598]: time="2025-07-07T00:21:50.325501183Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 00:21:50.326650 containerd[1598]: time="2025-07-07T00:21:50.326615702Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:50.329111 containerd[1598]: time="2025-07-07T00:21:50.329075885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:50.329968 containerd[1598]: time="2025-07-07T00:21:50.329937090Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.870366487s" Jul 7 00:21:50.329968 containerd[1598]: time="2025-07-07T00:21:50.329966155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 00:21:50.330450 containerd[1598]: time="2025-07-07T00:21:50.330410317Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 00:21:51.295190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689240010.mount: Deactivated successfully. 
Jul 7 00:21:51.944137 containerd[1598]: time="2025-07-07T00:21:51.944086830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:51.945096 containerd[1598]: time="2025-07-07T00:21:51.945034317Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 00:21:51.946192 containerd[1598]: time="2025-07-07T00:21:51.946155990Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:51.948040 containerd[1598]: time="2025-07-07T00:21:51.948005708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:51.948517 containerd[1598]: time="2025-07-07T00:21:51.948462024Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.618011451s" Jul 7 00:21:51.948517 containerd[1598]: time="2025-07-07T00:21:51.948512619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 00:21:51.949078 containerd[1598]: time="2025-07-07T00:21:51.949041941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 00:21:52.481105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061485207.mount: Deactivated successfully. 
Jul 7 00:21:53.142279 containerd[1598]: time="2025-07-07T00:21:53.142210562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:53.142914 containerd[1598]: time="2025-07-07T00:21:53.142871201Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 00:21:53.144194 containerd[1598]: time="2025-07-07T00:21:53.144152854Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:53.146783 containerd[1598]: time="2025-07-07T00:21:53.146747349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:53.148507 containerd[1598]: time="2025-07-07T00:21:53.148461653Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.199389636s" Jul 7 00:21:53.148558 containerd[1598]: time="2025-07-07T00:21:53.148503902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 00:21:53.148979 containerd[1598]: time="2025-07-07T00:21:53.148944638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:21:53.647257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621240019.mount: Deactivated successfully. 
Jul 7 00:21:53.653732 containerd[1598]: time="2025-07-07T00:21:53.653685377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:21:53.654919 containerd[1598]: time="2025-07-07T00:21:53.654860931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:21:53.656357 containerd[1598]: time="2025-07-07T00:21:53.656251519Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:21:53.658660 containerd[1598]: time="2025-07-07T00:21:53.658627884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:21:53.659182 containerd[1598]: time="2025-07-07T00:21:53.659136458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.164018ms" Jul 7 00:21:53.659182 containerd[1598]: time="2025-07-07T00:21:53.659165272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:21:53.659780 containerd[1598]: time="2025-07-07T00:21:53.659749858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 00:21:54.166652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264471316.mount: Deactivated successfully. 
Jul 7 00:21:55.723887 containerd[1598]: time="2025-07-07T00:21:55.723823325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:55.724652 containerd[1598]: time="2025-07-07T00:21:55.724581707Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 00:21:55.725818 containerd[1598]: time="2025-07-07T00:21:55.725777710Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:55.728379 containerd[1598]: time="2025-07-07T00:21:55.728347979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:21:55.730603 containerd[1598]: time="2025-07-07T00:21:55.730346987Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.070566081s" Jul 7 00:21:55.730603 containerd[1598]: time="2025-07-07T00:21:55.730396350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 00:21:58.747882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:21:58.748037 systemd[1]: kubelet.service: Consumed 217ms CPU time, 109.6M memory peak. Jul 7 00:21:58.750155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:21:58.774261 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... Jul 7 00:21:58.774285 systemd[1]: Reloading... Jul 7 00:21:58.842665 zram_generator::config[2303]: No configuration found. Jul 7 00:21:59.024108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:21:59.139762 systemd[1]: Reloading finished in 365 ms. Jul 7 00:21:59.197190 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:21:59.197280 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:21:59.197563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:21:59.197614 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.2M memory peak. Jul 7 00:21:59.199073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:21:59.370436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:21:59.374329 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:21:59.414133 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:21:59.414133 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 7 00:21:59.414133 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:21:59.414520 kubelet[2351]: I0707 00:21:59.414141 2351 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:21:59.675731 kubelet[2351]: I0707 00:21:59.675619 2351 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:21:59.675731 kubelet[2351]: I0707 00:21:59.675642 2351 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:21:59.675917 kubelet[2351]: I0707 00:21:59.675876 2351 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:21:59.704097 kubelet[2351]: I0707 00:21:59.704055 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:21:59.704225 kubelet[2351]: E0707 00:21:59.704152 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:21:59.708746 kubelet[2351]: I0707 00:21:59.708727 2351 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:21:59.713600 kubelet[2351]: I0707 00:21:59.713562 2351 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:21:59.713808 kubelet[2351]: I0707 00:21:59.713773 2351 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:21:59.713944 kubelet[2351]: I0707 00:21:59.713792 2351 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:21:59.713944 kubelet[2351]: I0707 00:21:59.713938 2351 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:21:59.714082 kubelet[2351]: I0707 00:21:59.713947 2351 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:21:59.714684 kubelet[2351]: I0707 00:21:59.714653 2351 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:21:59.716844 kubelet[2351]: I0707 00:21:59.716821 2351 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:21:59.716844 kubelet[2351]: I0707 00:21:59.716837 2351 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:21:59.716917 kubelet[2351]: I0707 00:21:59.716857 2351 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:21:59.718911 kubelet[2351]: I0707 00:21:59.718888 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:21:59.721997 kubelet[2351]: E0707 00:21:59.721969 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:21:59.721997 kubelet[2351]: E0707 00:21:59.721969 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:21:59.723616 
kubelet[2351]: I0707 00:21:59.722909 2351 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:21:59.723616 kubelet[2351]: I0707 00:21:59.723490 2351 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:21:59.724241 kubelet[2351]: W0707 00:21:59.724176 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:21:59.727164 kubelet[2351]: I0707 00:21:59.727136 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:21:59.727220 kubelet[2351]: I0707 00:21:59.727186 2351 server.go:1289] "Started kubelet" Jul 7 00:21:59.728073 kubelet[2351]: I0707 00:21:59.728016 2351 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:21:59.729532 kubelet[2351]: I0707 00:21:59.729262 2351 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:21:59.731141 kubelet[2351]: I0707 00:21:59.731025 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:21:59.732089 kubelet[2351]: I0707 00:21:59.731406 2351 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:21:59.732089 kubelet[2351]: I0707 00:21:59.731965 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:21:59.732158 kubelet[2351]: I0707 00:21:59.732149 2351 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:21:59.732307 kubelet[2351]: I0707 00:21:59.732286 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:21:59.732744 kubelet[2351]: E0707 00:21:59.731759 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fd0382aea4ce6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:21:59.727156454 +0000 UTC m=+0.348947798,LastTimestamp:2025-07-07 00:21:59.727156454 +0000 UTC m=+0.348947798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:21:59.732824 kubelet[2351]: I0707 00:21:59.732802 2351 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:21:59.732852 kubelet[2351]: I0707 00:21:59.732827 2351 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:21:59.733150 kubelet[2351]: E0707 00:21:59.733127 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:21:59.733719 kubelet[2351]: E0707 00:21:59.733698 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Jul 7 00:21:59.733804 kubelet[2351]: I0707 00:21:59.733782 2351 
factory.go:223] Registration of the systemd container factory successfully Jul 7 00:21:59.733898 kubelet[2351]: E0707 00:21:59.733878 2351 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:21:59.733898 kubelet[2351]: I0707 00:21:59.733878 2351 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:21:59.734146 kubelet[2351]: E0707 00:21:59.733966 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:21:59.735135 kubelet[2351]: I0707 00:21:59.735109 2351 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:21:59.749806 kubelet[2351]: I0707 00:21:59.749782 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:21:59.749806 kubelet[2351]: I0707 00:21:59.749798 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:21:59.749806 kubelet[2351]: I0707 00:21:59.749811 2351 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:21:59.750879 kubelet[2351]: I0707 00:21:59.750852 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:21:59.752255 kubelet[2351]: I0707 00:21:59.751933 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:21:59.752255 kubelet[2351]: I0707 00:21:59.751948 2351 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:21:59.752255 kubelet[2351]: I0707 00:21:59.751964 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
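The container manager dump a few entries above carries the kubelet's hard-eviction thresholds in a dense JSON blob. A minimal Go sketch that decodes just that array makes the defaults easier to read; the JSON below is abridged from the log (GracePeriod/MinReclaim fields dropped), and the threshold struct is a throwaway defined here, not a kubelet type.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // threshold mirrors only the fields of interest from the log's
    // HardEvictionThresholds entries.
    type threshold struct {
    	Signal   string `json:"Signal"`
    	Operator string `json:"Operator"`
    	Value    struct {
    		Quantity   *string `json:"Quantity"`
    		Percentage float64 `json:"Percentage"`
    	} `json:"Value"`
    }

    // raw is copied (abridged) from the container_manager_linux.go:272 entry above.
    const raw = `[
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
    ]`

    func main() {
    	var ts []threshold
    	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
    		panic(err)
    	}
    	for _, t := range ts {
    		if t.Value.Quantity != nil {
    			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
    		} else {
    			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
    		}
    	}
    }

Printed out, these are the signals the eviction manager lines later refer to: evict when imagefs.available drops below 15%, imagefs.inodesFree below 5%, memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%.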
Jul 7 00:21:59.752255 kubelet[2351]: I0707 00:21:59.751970 2351 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:21:59.752255 kubelet[2351]: E0707 00:21:59.752000 2351 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:21:59.833496 kubelet[2351]: E0707 00:21:59.833460 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:21:59.852679 kubelet[2351]: E0707 00:21:59.852652 2351 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:21:59.934360 kubelet[2351]: E0707 00:21:59.934258 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:21:59.934565 kubelet[2351]: E0707 00:21:59.934529 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Jul 7 00:22:00.034884 kubelet[2351]: E0707 00:22:00.034841 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:22:00.053062 kubelet[2351]: E0707 00:22:00.053025 2351 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:22:00.135437 kubelet[2351]: E0707 00:22:00.135392 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:22:00.230944 kubelet[2351]: E0707 00:22:00.230826 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:22:00.231704 kubelet[2351]: I0707 00:22:00.231676 2351 policy_none.go:49] "None policy: Start" Jul 7 00:22:00.231704 kubelet[2351]: I0707 00:22:00.231699 2351 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:22:00.231783 kubelet[2351]: I0707 00:22:00.231714 2351 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:22:00.235455 kubelet[2351]: E0707 00:22:00.235435 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:22:00.238138 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:22:00.251657 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 00:22:00.273452 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
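Every "connection refused" against https://10.0.0.140:6443 in this stretch (the certificate signing request, the reflector list/watch calls, the event post, the node lease) has the same cause: the kubelet comes up before the API server it is about to launch as a static pod, so it simply retries until kube-apiserver-localhost is running. A small Go sketch of the same probe, if you wanted to run it from the node; the address comes from the log, and skipping TLS verification is only for this throwaway check.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 3 * time.Second,
    		// Skip certificate verification for the probe only; a real client
    		// would trust the cluster CA bundle instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://10.0.0.140:6443/healthz")
    	if err != nil {
    		// Expected during the bootstrap window: the same dial error the kubelet logs.
    		fmt.Println("apiserver not reachable yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("apiserver answered:", resp.Status)
    }

Once the static kube-apiserver pod started further down in the log is up, the same request gets an HTTP response instead of a dial error.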
Jul 7 00:22:00.274995 kubelet[2351]: E0707 00:22:00.274959 2351 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:22:00.275334 kubelet[2351]: I0707 00:22:00.275241 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:22:00.275334 kubelet[2351]: I0707 00:22:00.275256 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:22:00.276013 kubelet[2351]: I0707 00:22:00.275712 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:22:00.276573 kubelet[2351]: E0707 00:22:00.276552 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:22:00.276646 kubelet[2351]: E0707 00:22:00.276583 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 00:22:00.335255 kubelet[2351]: E0707 00:22:00.335210 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Jul 7 00:22:00.377641 kubelet[2351]: I0707 00:22:00.377524 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:22:00.377973 kubelet[2351]: E0707 00:22:00.377939 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 7 00:22:00.537547 kubelet[2351]: I0707 00:22:00.537408 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:00.537547 kubelet[2351]: I0707 00:22:00.537449 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:00.537547 kubelet[2351]: I0707 00:22:00.537471 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:00.544210 kubelet[2351]: E0707 00:22:00.544163 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:22:00.579685 kubelet[2351]: I0707 00:22:00.579658 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:22:00.580083 kubelet[2351]: E0707 00:22:00.580021 2351 kubelet_node_status.go:107] "Unable to register node 
with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 7 00:22:00.901088 kubelet[2351]: E0707 00:22:00.900973 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:22:00.981876 kubelet[2351]: I0707 00:22:00.981845 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:22:00.982129 kubelet[2351]: E0707 00:22:00.982094 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Jul 7 00:22:01.007129 systemd[1]: Created slice kubepods-burstable-podd225b05457cb409e8afb34571d4dae0b.slice - libcontainer container kubepods-burstable-podd225b05457cb409e8afb34571d4dae0b.slice. Jul 7 00:22:01.029400 kubelet[2351]: E0707 00:22:01.029359 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.029783 kubelet[2351]: E0707 00:22:01.029744 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.030376 containerd[1598]: time="2025-07-07T00:22:01.030335895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d225b05457cb409e8afb34571d4dae0b,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:01.032654 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 7 00:22:01.040171 kubelet[2351]: I0707 00:22:01.040143 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:01.040238 kubelet[2351]: I0707 00:22:01.040189 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:01.040238 kubelet[2351]: I0707 00:22:01.040213 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:01.040291 kubelet[2351]: I0707 00:22:01.040254 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:01.040291 kubelet[2351]: I0707 00:22:01.040281 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:01.040347 kubelet[2351]: I0707 00:22:01.040301 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:01.050298 containerd[1598]: time="2025-07-07T00:22:01.050252779Z" level=info msg="connecting to shim 978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349" address="unix:///run/containerd/s/369b74613961af9fe0dd41adb0b2c9eb315112e5949021848c9198aa49ed8324" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:01.051611 kubelet[2351]: E0707 00:22:01.051567 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.054544 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
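All three control-plane pods in this log are static pods: the kubelet watches the path reported in the "Adding static pod path" entries (/etc/kubernetes/manifests) and builds kube-apiserver-localhost, kube-controller-manager-localhost and kube-scheduler-localhost from the manifests it finds there, mounting the host-path volumes reconciled above. A trivial Go sketch to see what the kubelet is working from; the directory comes from the log, the file names are whatever the installer dropped there.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Static pod manifest directory, as logged by "Adding static pod path".
    	const dir = "/etc/kubernetes/manifests"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, e := range entries {
    		info, err := e.Info()
    		if err != nil {
    			continue
    		}
    		fmt.Printf("%s (%d bytes)\n", filepath.Join(dir, e.Name()), info.Size())
    	}
    }

Editing or removing a file in that directory is also how these pods are updated or torn down; the kubelet reacts to the file change without any API server involvement.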
Jul 7 00:22:01.060601 kubelet[2351]: E0707 00:22:01.060545 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.069416 kubelet[2351]: E0707 00:22:01.069389 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:22:01.083726 systemd[1]: Started cri-containerd-978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349.scope - libcontainer container 978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349. Jul 7 00:22:01.125669 containerd[1598]: time="2025-07-07T00:22:01.125577273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d225b05457cb409e8afb34571d4dae0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349\"" Jul 7 00:22:01.126558 kubelet[2351]: E0707 00:22:01.126533 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.131031 containerd[1598]: time="2025-07-07T00:22:01.131006222Z" level=info msg="CreateContainer within sandbox \"978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:22:01.135736 kubelet[2351]: E0707 00:22:01.135682 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Jul 7 00:22:01.141953 containerd[1598]: time="2025-07-07T00:22:01.141914745Z" level=info msg="Container 26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:01.149577 containerd[1598]: time="2025-07-07T00:22:01.149541305Z" level=info msg="CreateContainer within sandbox \"978dc60c29943fd49f0fa2a42c8dfec679637e72368f0f38036b93ff9cc56349\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1\"" Jul 7 00:22:01.150026 containerd[1598]: time="2025-07-07T00:22:01.149998442Z" level=info msg="StartContainer for \"26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1\"" Jul 7 00:22:01.151083 containerd[1598]: time="2025-07-07T00:22:01.151049553Z" level=info msg="connecting to shim 26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1" address="unix:///run/containerd/s/369b74613961af9fe0dd41adb0b2c9eb315112e5949021848c9198aa49ed8324" protocol=ttrpc version=3 Jul 7 00:22:01.177710 systemd[1]: Started cri-containerd-26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1.scope - libcontainer container 26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1. 
Jul 7 00:22:01.221893 containerd[1598]: time="2025-07-07T00:22:01.221860484Z" level=info msg="StartContainer for \"26bce9125ebd00c64d31cfdb60a49716ad35bd908b0db0790ed5d9ecd81106f1\" returns successfully" Jul 7 00:22:01.253663 kubelet[2351]: E0707 00:22:01.253619 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:22:01.352800 kubelet[2351]: E0707 00:22:01.352759 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.353362 containerd[1598]: time="2025-07-07T00:22:01.353315151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:01.361725 kubelet[2351]: E0707 00:22:01.361689 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.362211 containerd[1598]: time="2025-07-07T00:22:01.362055199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:01.377338 containerd[1598]: time="2025-07-07T00:22:01.377280797Z" level=info msg="connecting to shim 212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3" address="unix:///run/containerd/s/1e69cb742acab629ea455a407960073121f631e2d3f554dc4d169468abfe9e07" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:01.386007 containerd[1598]: time="2025-07-07T00:22:01.385968055Z" level=info msg="connecting to shim 879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619" address="unix:///run/containerd/s/a03eac097ed32346bc80df0770a30b28022f6cf015dced53e3f31490020a3c51" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:01.402730 systemd[1]: Started cri-containerd-212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3.scope - libcontainer container 212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3. Jul 7 00:22:01.411527 systemd[1]: Started cri-containerd-879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619.scope - libcontainer container 879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619. 
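The pattern repeating here for each control-plane pod is the CRI flow as containerd logs it: RunPodSandbox returns a sandbox id, CreateContainer and StartContainer follow, and systemd reports a matching transient cri-containerd-<id>.scope under the pod's kubepods-burstable-pod<uid>.slice. A stdlib-only Go sketch that recovers those ids from the cgroup tree; it assumes the usual cgroup v2 mount at /sys/fs/cgroup, consistent with the CgroupDriver "systemd" and CgroupVersion 2 settings in the kubelet dump above.

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := "/sys/fs/cgroup/kubepods.slice"
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || !d.IsDir() {
    			return nil // skip unreadable entries and plain files
    		}
    		name := d.Name()
    		if strings.HasPrefix(name, "cri-containerd-") && strings.HasSuffix(name, ".scope") {
    			id := strings.TrimSuffix(strings.TrimPrefix(name, "cri-containerd-"), ".scope")
    			fmt.Printf("%s\n  container/sandbox id: %s\n", path, id)
    		}
    		return nil
    	})
    	if err != nil {
    		fmt.Println(err)
    	}
    }

The ids it prints are the same ones appearing in the "Started cri-containerd-….scope" and StartContainer lines above.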
Jul 7 00:22:01.451323 containerd[1598]: time="2025-07-07T00:22:01.450499922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3\"" Jul 7 00:22:01.455259 kubelet[2351]: E0707 00:22:01.455185 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.457349 containerd[1598]: time="2025-07-07T00:22:01.457308148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619\"" Jul 7 00:22:01.457943 kubelet[2351]: E0707 00:22:01.457919 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.460998 containerd[1598]: time="2025-07-07T00:22:01.460945568Z" level=info msg="CreateContainer within sandbox \"212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:22:01.463342 containerd[1598]: time="2025-07-07T00:22:01.463310833Z" level=info msg="CreateContainer within sandbox \"879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:22:01.469903 containerd[1598]: time="2025-07-07T00:22:01.469850645Z" level=info msg="Container 332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:01.474341 containerd[1598]: time="2025-07-07T00:22:01.474290360Z" level=info msg="Container b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:01.480134 containerd[1598]: time="2025-07-07T00:22:01.480095765Z" level=info msg="CreateContainer within sandbox \"212c2e35e66d9069a66381a7c1ec4b6c07343e7a21b4fe1f2df6821e293112e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f\"" Jul 7 00:22:01.480563 containerd[1598]: time="2025-07-07T00:22:01.480520251Z" level=info msg="StartContainer for \"332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f\"" Jul 7 00:22:01.481450 containerd[1598]: time="2025-07-07T00:22:01.481428994Z" level=info msg="connecting to shim 332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f" address="unix:///run/containerd/s/1e69cb742acab629ea455a407960073121f631e2d3f554dc4d169468abfe9e07" protocol=ttrpc version=3 Jul 7 00:22:01.482835 containerd[1598]: time="2025-07-07T00:22:01.482806908Z" level=info msg="CreateContainer within sandbox \"879caad1c64dbc6cd19dc76c5326ded3a297b9fe9f564b171233a5e01607d619\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a\"" Jul 7 00:22:01.483392 containerd[1598]: time="2025-07-07T00:22:01.483351499Z" level=info msg="StartContainer for \"b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a\"" Jul 7 00:22:01.484258 containerd[1598]: time="2025-07-07T00:22:01.484222122Z" level=info msg="connecting to shim 
b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a" address="unix:///run/containerd/s/a03eac097ed32346bc80df0770a30b28022f6cf015dced53e3f31490020a3c51" protocol=ttrpc version=3 Jul 7 00:22:01.502720 systemd[1]: Started cri-containerd-332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f.scope - libcontainer container 332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f. Jul 7 00:22:01.506513 systemd[1]: Started cri-containerd-b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a.scope - libcontainer container b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a. Jul 7 00:22:01.554363 containerd[1598]: time="2025-07-07T00:22:01.554287895Z" level=info msg="StartContainer for \"332bb702923fa6092ef514859b821df9fa0f634c92180223e3a37ca5fe169b5f\" returns successfully" Jul 7 00:22:01.562278 containerd[1598]: time="2025-07-07T00:22:01.562208165Z" level=info msg="StartContainer for \"b122df9211a0d0d23689f01ace203465ed9ed59b3f658323a3957f240e47fb9a\" returns successfully" Jul 7 00:22:01.761994 kubelet[2351]: E0707 00:22:01.761808 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.763044 kubelet[2351]: E0707 00:22:01.762376 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.764809 kubelet[2351]: E0707 00:22:01.764756 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.764974 kubelet[2351]: E0707 00:22:01.764938 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.766739 kubelet[2351]: E0707 00:22:01.766709 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:01.770540 kubelet[2351]: E0707 00:22:01.770520 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:01.784082 kubelet[2351]: I0707 00:22:01.784057 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:22:02.771202 kubelet[2351]: E0707 00:22:02.770963 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:02.771202 kubelet[2351]: E0707 00:22:02.771124 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:02.772204 kubelet[2351]: E0707 00:22:02.772097 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:22:02.772310 kubelet[2351]: E0707 00:22:02.772293 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:03.082843 kubelet[2351]: E0707 00:22:03.082712 2351 nodelease.go:49] "Failed to get node when trying to set owner ref 
to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 00:22:03.179167 kubelet[2351]: I0707 00:22:03.178681 2351 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:22:03.233951 kubelet[2351]: I0707 00:22:03.233903 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:03.240459 kubelet[2351]: E0707 00:22:03.240353 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:03.240459 kubelet[2351]: I0707 00:22:03.240397 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:03.245817 kubelet[2351]: E0707 00:22:03.245777 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:03.245817 kubelet[2351]: I0707 00:22:03.245811 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:03.248796 kubelet[2351]: E0707 00:22:03.248753 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:03.723488 kubelet[2351]: I0707 00:22:03.723444 2351 apiserver.go:52] "Watching apiserver" Jul 7 00:22:03.733972 kubelet[2351]: I0707 00:22:03.733932 2351 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:22:05.319038 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-7.scope)... Jul 7 00:22:05.319051 systemd[1]: Reloading... Jul 7 00:22:05.387670 zram_generator::config[2679]: No configuration found. Jul 7 00:22:05.474426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:22:05.602279 systemd[1]: Reloading finished in 282 ms. Jul 7 00:22:05.629910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:22:05.654814 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:22:05.655094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:22:05.655141 systemd[1]: kubelet.service: Consumed 827ms CPU time, 130.9M memory peak. Jul 7 00:22:05.656941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:22:05.848534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:22:05.853448 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:22:05.895021 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:22:05.895021 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
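The "Failed creating a mirror pod … no PriorityClass with name system-node-critical was found" errors a little earlier are a bootstrap race rather than a configuration problem: mirror pods for the static control-plane pods can only be posted once the API server has installed its built-in priority classes, and the "already exists" messages after the kubelet restart below show that it eventually did. If you wanted to check that state directly, a client-go sketch along these lines would do; the kubeconfig path is an assumption (any admin or kubelet kubeconfig on the node works).

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed path; substitute whatever kubeconfig exists on the node.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "system-node-critical", metav1.GetOptions{})
    	if err != nil {
    		// Matches the "Failed creating a mirror pod" errors above while bootstrap is incomplete.
    		fmt.Println("not there yet:", err)
    		return
    	}
    	fmt.Println("system-node-critical exists with value", pc.Value)
    }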
Jul 7 00:22:05.895021 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:22:05.895387 kubelet[2721]: I0707 00:22:05.895049 2721 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:22:05.900732 kubelet[2721]: I0707 00:22:05.900702 2721 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:22:05.900732 kubelet[2721]: I0707 00:22:05.900720 2721 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:22:05.902251 kubelet[2721]: I0707 00:22:05.901371 2721 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:22:05.903242 kubelet[2721]: I0707 00:22:05.903210 2721 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:22:05.905283 kubelet[2721]: I0707 00:22:05.905178 2721 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:22:05.908753 kubelet[2721]: I0707 00:22:05.908737 2721 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:22:05.914728 kubelet[2721]: I0707 00:22:05.914690 2721 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:22:05.914969 kubelet[2721]: I0707 00:22:05.914929 2721 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:22:05.915104 kubelet[2721]: I0707 00:22:05.914954 2721 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:22:05.915195 kubelet[2721]: I0707 00:22:05.915111 2721 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:22:05.915195 kubelet[2721]: I0707 
00:22:05.915120 2721 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:22:05.915195 kubelet[2721]: I0707 00:22:05.915159 2721 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:22:05.915340 kubelet[2721]: I0707 00:22:05.915322 2721 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:22:05.915372 kubelet[2721]: I0707 00:22:05.915341 2721 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:22:05.915372 kubelet[2721]: I0707 00:22:05.915361 2721 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:22:05.915372 kubelet[2721]: I0707 00:22:05.915375 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:22:05.917858 kubelet[2721]: I0707 00:22:05.917805 2721 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:22:05.918487 kubelet[2721]: I0707 00:22:05.918462 2721 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:22:05.921258 kubelet[2721]: I0707 00:22:05.921229 2721 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:22:05.921310 kubelet[2721]: I0707 00:22:05.921265 2721 server.go:1289] "Started kubelet" Jul 7 00:22:05.923952 kubelet[2721]: I0707 00:22:05.923742 2721 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:22:05.924039 kubelet[2721]: I0707 00:22:05.923999 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:22:05.924199 kubelet[2721]: I0707 00:22:05.924182 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:22:05.924566 kubelet[2721]: I0707 00:22:05.924547 2721 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:22:05.928618 kubelet[2721]: I0707 00:22:05.928252 2721 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:22:05.928618 kubelet[2721]: E0707 00:22:05.928349 2721 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:22:05.928809 kubelet[2721]: I0707 00:22:05.928791 2721 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:22:05.928919 kubelet[2721]: I0707 00:22:05.928904 2721 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:22:05.929744 kubelet[2721]: I0707 00:22:05.929597 2721 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:22:05.929880 kubelet[2721]: I0707 00:22:05.929778 2721 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:22:05.930006 kubelet[2721]: I0707 00:22:05.929987 2721 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:22:05.930487 kubelet[2721]: I0707 00:22:05.930461 2721 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:22:05.935198 kubelet[2721]: I0707 00:22:05.934707 2721 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:22:05.937297 kubelet[2721]: E0707 00:22:05.937208 2721 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:22:05.939302 kubelet[2721]: I0707 00:22:05.939260 2721 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:22:05.947132 kubelet[2721]: I0707 00:22:05.947101 2721 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:22:05.947132 kubelet[2721]: I0707 00:22:05.947124 2721 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:22:05.947222 kubelet[2721]: I0707 00:22:05.947143 2721 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 00:22:05.947222 kubelet[2721]: I0707 00:22:05.947151 2721 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:22:05.947268 kubelet[2721]: E0707 00:22:05.947202 2721 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:22:05.973276 kubelet[2721]: I0707 00:22:05.973244 2721 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:22:05.973276 kubelet[2721]: I0707 00:22:05.973264 2721 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:22:05.973276 kubelet[2721]: I0707 00:22:05.973284 2721 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:22:05.973422 kubelet[2721]: I0707 00:22:05.973398 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:22:05.973422 kubelet[2721]: I0707 00:22:05.973408 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:22:05.973422 kubelet[2721]: I0707 00:22:05.973422 2721 policy_none.go:49] "None policy: Start" Jul 7 00:22:05.973486 kubelet[2721]: I0707 00:22:05.973431 2721 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:22:05.973486 kubelet[2721]: I0707 00:22:05.973441 2721 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:22:05.973524 kubelet[2721]: I0707 00:22:05.973519 2721 state_mem.go:75] "Updated machine memory state" Jul 7 00:22:05.977866 kubelet[2721]: E0707 00:22:05.977840 2721 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:22:05.978032 kubelet[2721]: I0707 00:22:05.978010 2721 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:22:05.978079 kubelet[2721]: I0707 00:22:05.978030 2721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:22:05.978342 kubelet[2721]: I0707 00:22:05.978178 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:22:05.979749 kubelet[2721]: E0707 00:22:05.979716 2721 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:22:06.048738 kubelet[2721]: I0707 00:22:06.048677 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.048738 kubelet[2721]: I0707 00:22:06.048725 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:06.048997 kubelet[2721]: I0707 00:22:06.048816 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.084103 kubelet[2721]: I0707 00:22:06.084058 2721 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:22:06.090329 kubelet[2721]: I0707 00:22:06.090310 2721 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 00:22:06.090373 kubelet[2721]: I0707 00:22:06.090368 2721 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:22:06.231536 kubelet[2721]: I0707 00:22:06.230863 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.231536 kubelet[2721]: I0707 00:22:06.230908 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.231536 kubelet[2721]: I0707 00:22:06.230931 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.231536 kubelet[2721]: I0707 00:22:06.230946 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.231536 kubelet[2721]: I0707 00:22:06.230998 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.231805 kubelet[2721]: I0707 00:22:06.231033 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:06.231805 kubelet[2721]: I0707 00:22:06.231052 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.231805 kubelet[2721]: I0707 00:22:06.231066 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d225b05457cb409e8afb34571d4dae0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d225b05457cb409e8afb34571d4dae0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.231805 kubelet[2721]: I0707 00:22:06.231081 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.320784 sudo[2760]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:22:06.321103 sudo[2760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:22:06.354362 kubelet[2721]: E0707 00:22:06.354096 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:06.354362 kubelet[2721]: E0707 00:22:06.354248 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:06.354362 kubelet[2721]: E0707 00:22:06.354261 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:06.916650 kubelet[2721]: I0707 00:22:06.916566 2721 apiserver.go:52] "Watching apiserver" Jul 7 00:22:06.929760 kubelet[2721]: I0707 00:22:06.929727 2721 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:22:06.958490 sudo[2760]: pam_unix(sudo:session): session closed for user root Jul 7 00:22:06.962010 kubelet[2721]: I0707 00:22:06.961670 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.962010 kubelet[2721]: I0707 00:22:06.961831 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.962010 kubelet[2721]: I0707 00:22:06.961864 2721 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:06.969350 kubelet[2721]: E0707 00:22:06.969326 2721 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:22:06.969494 kubelet[2721]: E0707 00:22:06.969475 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:06.969708 kubelet[2721]: E0707 00:22:06.969687 2721 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 00:22:06.969788 kubelet[2721]: E0707 00:22:06.969771 2721 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:06.971392 kubelet[2721]: E0707 00:22:06.971374 2721 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:22:06.971477 kubelet[2721]: E0707 00:22:06.971459 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:07.000687 kubelet[2721]: I0707 00:22:07.000629 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.000608795 podStartE2EDuration="1.000608795s" podCreationTimestamp="2025-07-07 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:06.994376089 +0000 UTC m=+1.136568908" watchObservedRunningTime="2025-07-07 00:22:07.000608795 +0000 UTC m=+1.142801634" Jul 7 00:22:07.010915 kubelet[2721]: I0707 00:22:07.010865 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.01084582 podStartE2EDuration="1.01084582s" podCreationTimestamp="2025-07-07 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:07.000961386 +0000 UTC m=+1.143154205" watchObservedRunningTime="2025-07-07 00:22:07.01084582 +0000 UTC m=+1.153038639" Jul 7 00:22:07.020540 kubelet[2721]: I0707 00:22:07.020478 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.020457051 podStartE2EDuration="1.020457051s" podCreationTimestamp="2025-07-07 00:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:07.011407723 +0000 UTC m=+1.153600542" watchObservedRunningTime="2025-07-07 00:22:07.020457051 +0000 UTC m=+1.162649870" Jul 7 00:22:07.963835 kubelet[2721]: E0707 00:22:07.963673 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:07.963835 kubelet[2721]: E0707 00:22:07.963759 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:07.964475 kubelet[2721]: E0707 00:22:07.964147 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:08.378113 sudo[1800]: pam_unix(sudo:session): session closed for user root Jul 7 00:22:08.379466 sshd[1799]: Connection closed by 10.0.0.1 port 35508 Jul 7 00:22:08.379858 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:08.384572 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:35508.service: Deactivated successfully. Jul 7 00:22:08.386765 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:22:08.386972 systemd[1]: session-7.scope: Consumed 4.966s CPU time, 255.1M memory peak. 
Jul 7 00:22:08.388175 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:22:08.389608 systemd-logind[1579]: Removed session 7. Jul 7 00:22:08.965065 kubelet[2721]: E0707 00:22:08.965024 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:08.965872 kubelet[2721]: E0707 00:22:08.965688 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:11.111854 kubelet[2721]: I0707 00:22:11.111828 2721 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:22:11.112263 containerd[1598]: time="2025-07-07T00:22:11.112113790Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:22:11.112481 kubelet[2721]: I0707 00:22:11.112283 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:22:12.149230 systemd[1]: Created slice kubepods-besteffort-poda89e47b8_674d_4954_9eec_fc2c91649760.slice - libcontainer container kubepods-besteffort-poda89e47b8_674d_4954_9eec_fc2c91649760.slice. Jul 7 00:22:12.165415 systemd[1]: Created slice kubepods-burstable-podcf9b3fc7_319a_4ea9_9813_406bb0cb6545.slice - libcontainer container kubepods-burstable-podcf9b3fc7_319a_4ea9_9813_406bb0cb6545.slice. Jul 7 00:22:12.173848 kubelet[2721]: I0707 00:22:12.173813 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-run\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.173848 kubelet[2721]: I0707 00:22:12.173846 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwp4b\" (UniqueName: \"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-kube-api-access-vwp4b\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173863 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a89e47b8-674d-4954-9eec-fc2c91649760-kube-proxy\") pod \"kube-proxy-7fb7f\" (UID: \"a89e47b8-674d-4954-9eec-fc2c91649760\") " pod="kube-system/kube-proxy-7fb7f" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173877 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89e47b8-674d-4954-9eec-fc2c91649760-lib-modules\") pod \"kube-proxy-7fb7f\" (UID: \"a89e47b8-674d-4954-9eec-fc2c91649760\") " pod="kube-system/kube-proxy-7fb7f" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173908 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-bpf-maps\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173936 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hostproc\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173955 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-lib-modules\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174215 kubelet[2721]: I0707 00:22:12.173973 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-config-path\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.173985 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hubble-tls\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.174027 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89e47b8-674d-4954-9eec-fc2c91649760-xtables-lock\") pod \"kube-proxy-7fb7f\" (UID: \"a89e47b8-674d-4954-9eec-fc2c91649760\") " pod="kube-system/kube-proxy-7fb7f" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.174056 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-cgroup\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.174084 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-etc-cni-netd\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.174114 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-xtables-lock\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174349 kubelet[2721]: I0707 00:22:12.174137 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-net\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174475 kubelet[2721]: I0707 00:22:12.174156 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-kernel\") pod \"cilium-rmtj2\" (UID: 
\"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174475 kubelet[2721]: I0707 00:22:12.174178 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzf7\" (UniqueName: \"kubernetes.io/projected/a89e47b8-674d-4954-9eec-fc2c91649760-kube-api-access-mkzf7\") pod \"kube-proxy-7fb7f\" (UID: \"a89e47b8-674d-4954-9eec-fc2c91649760\") " pod="kube-system/kube-proxy-7fb7f" Jul 7 00:22:12.174475 kubelet[2721]: I0707 00:22:12.174201 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cni-path\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.174475 kubelet[2721]: I0707 00:22:12.174229 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-clustermesh-secrets\") pod \"cilium-rmtj2\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " pod="kube-system/cilium-rmtj2" Jul 7 00:22:12.314711 systemd[1]: Created slice kubepods-besteffort-podef970f4b_8dd0_43a5_8fa5_69cbfe2f415b.slice - libcontainer container kubepods-besteffort-podef970f4b_8dd0_43a5_8fa5_69cbfe2f415b.slice. Jul 7 00:22:12.376025 kubelet[2721]: I0707 00:22:12.375976 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m44b7\" (UniqueName: \"kubernetes.io/projected/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-kube-api-access-m44b7\") pod \"cilium-operator-6c4d7847fc-dtc8l\" (UID: \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\") " pod="kube-system/cilium-operator-6c4d7847fc-dtc8l" Jul 7 00:22:12.376025 kubelet[2721]: I0707 00:22:12.376014 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dtc8l\" (UID: \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\") " pod="kube-system/cilium-operator-6c4d7847fc-dtc8l" Jul 7 00:22:12.464318 kubelet[2721]: E0707 00:22:12.464286 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.464955 containerd[1598]: time="2025-07-07T00:22:12.464725295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fb7f,Uid:a89e47b8-674d-4954-9eec-fc2c91649760,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:12.468031 kubelet[2721]: E0707 00:22:12.468008 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.468488 containerd[1598]: time="2025-07-07T00:22:12.468451853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmtj2,Uid:cf9b3fc7-319a-4ea9-9813-406bb0cb6545,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:12.502183 containerd[1598]: time="2025-07-07T00:22:12.502074410Z" level=info msg="connecting to shim 8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:12.504024 
containerd[1598]: time="2025-07-07T00:22:12.504000309Z" level=info msg="connecting to shim 2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d" address="unix:///run/containerd/s/7ea73f1c3e4590acd5bb9e36a47f8c8956d5dc2c896da0e218e8b39b375a4df6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:12.556735 systemd[1]: Started cri-containerd-8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2.scope - libcontainer container 8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2. Jul 7 00:22:12.560447 systemd[1]: Started cri-containerd-2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d.scope - libcontainer container 2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d. Jul 7 00:22:12.586682 containerd[1598]: time="2025-07-07T00:22:12.586639819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmtj2,Uid:cf9b3fc7-319a-4ea9-9813-406bb0cb6545,Namespace:kube-system,Attempt:0,} returns sandbox id \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\"" Jul 7 00:22:12.587386 kubelet[2721]: E0707 00:22:12.587311 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.588342 containerd[1598]: time="2025-07-07T00:22:12.588307493Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:22:12.588921 containerd[1598]: time="2025-07-07T00:22:12.588863950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fb7f,Uid:a89e47b8-674d-4954-9eec-fc2c91649760,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d\"" Jul 7 00:22:12.589758 kubelet[2721]: E0707 00:22:12.589726 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.595087 containerd[1598]: time="2025-07-07T00:22:12.595059335Z" level=info msg="CreateContainer within sandbox \"2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:22:12.606777 containerd[1598]: time="2025-07-07T00:22:12.606722774Z" level=info msg="Container 500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:12.614031 containerd[1598]: time="2025-07-07T00:22:12.613992799Z" level=info msg="CreateContainer within sandbox \"2d03a3583af6d8e08f33221cd4098625920e39ca0cdbb7538a6c202930b6bb6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9\"" Jul 7 00:22:12.614450 containerd[1598]: time="2025-07-07T00:22:12.614405850Z" level=info msg="StartContainer for \"500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9\"" Jul 7 00:22:12.615679 containerd[1598]: time="2025-07-07T00:22:12.615653831Z" level=info msg="connecting to shim 500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9" address="unix:///run/containerd/s/7ea73f1c3e4590acd5bb9e36a47f8c8956d5dc2c896da0e218e8b39b375a4df6" protocol=ttrpc version=3 Jul 7 00:22:12.619939 kubelet[2721]: E0707 00:22:12.619859 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.620360 containerd[1598]: time="2025-07-07T00:22:12.620326881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtc8l,Uid:ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:12.636817 systemd[1]: Started cri-containerd-500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9.scope - libcontainer container 500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9. Jul 7 00:22:12.647836 containerd[1598]: time="2025-07-07T00:22:12.647719456Z" level=info msg="connecting to shim ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b" address="unix:///run/containerd/s/fab788275e298d10bf9a2fc804f0dedd9bfd846762c28883c7a9311aa16ef092" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:12.673896 systemd[1]: Started cri-containerd-ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b.scope - libcontainer container ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b. Jul 7 00:22:12.682202 containerd[1598]: time="2025-07-07T00:22:12.682148829Z" level=info msg="StartContainer for \"500c39fe8621a37311f69a38ee42f361112f04e44406c488341c9532f04956d9\" returns successfully" Jul 7 00:22:12.732291 containerd[1598]: time="2025-07-07T00:22:12.732165707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dtc8l,Uid:ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\"" Jul 7 00:22:12.733429 kubelet[2721]: E0707 00:22:12.733392 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.973467 kubelet[2721]: E0707 00:22:12.973434 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:12.982051 kubelet[2721]: I0707 00:22:12.981496 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7fb7f" podStartSLOduration=0.981480393 podStartE2EDuration="981.480393ms" podCreationTimestamp="2025-07-07 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:12.981035881 +0000 UTC m=+7.123228700" watchObservedRunningTime="2025-07-07 00:22:12.981480393 +0000 UTC m=+7.123673212" Jul 7 00:22:16.886986 kubelet[2721]: E0707 00:22:16.886935 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:16.915379 kubelet[2721]: E0707 00:22:16.915345 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:16.979829 kubelet[2721]: E0707 00:22:16.979797 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:17.964844 kubelet[2721]: E0707 00:22:17.964800 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 
00:22:20.403628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196366839.mount: Deactivated successfully. Jul 7 00:22:23.053479 update_engine[1585]: I20250707 00:22:23.053398 1585 update_attempter.cc:509] Updating boot flags... Jul 7 00:22:23.295750 containerd[1598]: time="2025-07-07T00:22:23.295462742Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:23.296164 containerd[1598]: time="2025-07-07T00:22:23.296137511Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:22:23.297373 containerd[1598]: time="2025-07-07T00:22:23.297305795Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:23.298821 containerd[1598]: time="2025-07-07T00:22:23.298795247Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.710451062s" Jul 7 00:22:23.298884 containerd[1598]: time="2025-07-07T00:22:23.298838519Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:22:23.302422 containerd[1598]: time="2025-07-07T00:22:23.302382865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:22:23.319772 containerd[1598]: time="2025-07-07T00:22:23.315570083Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:22:23.328890 containerd[1598]: time="2025-07-07T00:22:23.327365122Z" level=info msg="Container 6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:23.337364 containerd[1598]: time="2025-07-07T00:22:23.337311778Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\"" Jul 7 00:22:23.339080 containerd[1598]: time="2025-07-07T00:22:23.339002042Z" level=info msg="StartContainer for \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\"" Jul 7 00:22:23.342690 containerd[1598]: time="2025-07-07T00:22:23.342637390Z" level=info msg="connecting to shim 6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" protocol=ttrpc version=3 Jul 7 00:22:23.466745 systemd[1]: Started cri-containerd-6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137.scope - libcontainer container 6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137. 
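Annotation: the Cilium image pulled above is pinned by both tag and digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a…), and containerd reports the result against the digest with an empty repo tag. A rough illustration of how such a reference breaks down, as plain string handling rather than containerd's own reference parser:

# Sketch: split an OCI image reference of the form repo[:tag][@digest] into parts.
# Purely illustrative string handling; containerd uses its own reference library.
def parse_image_ref(ref: str) -> dict:
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    # A ':' after the last '/' separates the tag; earlier ':' may belong to a registry port.
    repo, tag = ref, None
    if ":" in ref[ref.rfind("/") + 1:]:
        repo, tag = ref.rsplit(":", 1)
    return {"repository": repo, "tag": tag, "digest": digest}

example = ("quay.io/cilium/cilium:v1.12.5@sha256:"
           "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
print(parse_image_ref(example))
# -> repository 'quay.io/cilium/cilium', tag 'v1.12.5', digest 'sha256:06ce2b0a…'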
Jul 7 00:22:23.518147 containerd[1598]: time="2025-07-07T00:22:23.518088939Z" level=info msg="StartContainer for \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" returns successfully" Jul 7 00:22:23.528739 systemd[1]: cri-containerd-6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137.scope: Deactivated successfully. Jul 7 00:22:23.529961 containerd[1598]: time="2025-07-07T00:22:23.529927712Z" level=info msg="received exit event container_id:\"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" id:\"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" pid:3171 exited_at:{seconds:1751847743 nanos:529532122}" Jul 7 00:22:23.530069 containerd[1598]: time="2025-07-07T00:22:23.530028663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" id:\"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" pid:3171 exited_at:{seconds:1751847743 nanos:529532122}" Jul 7 00:22:23.550123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137-rootfs.mount: Deactivated successfully. Jul 7 00:22:24.015436 kubelet[2721]: E0707 00:22:24.015396 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:24.803890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171082030.mount: Deactivated successfully. Jul 7 00:22:25.019136 kubelet[2721]: E0707 00:22:25.018989 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:25.025901 containerd[1598]: time="2025-07-07T00:22:25.025674900Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:22:25.045852 containerd[1598]: time="2025-07-07T00:22:25.044503852Z" level=info msg="Container 93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:25.046797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443129141.mount: Deactivated successfully. Jul 7 00:22:25.052770 containerd[1598]: time="2025-07-07T00:22:25.052741171Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\"" Jul 7 00:22:25.054321 containerd[1598]: time="2025-07-07T00:22:25.053953575Z" level=info msg="StartContainer for \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\"" Jul 7 00:22:25.054772 containerd[1598]: time="2025-07-07T00:22:25.054739513Z" level=info msg="connecting to shim 93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" protocol=ttrpc version=3 Jul 7 00:22:25.075839 systemd[1]: Started cri-containerd-93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a.scope - libcontainer container 93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a. 
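Annotation: the TaskExit events above carry the exit time as a protobuf-style timestamp, exited_at:{seconds:1751847743 nanos:529532122}, i.e. seconds and nanoseconds since the Unix epoch. A small sketch converting it back to the wall-clock form used elsewhere in this journal:

# Sketch: convert a containerd exited_at {seconds, nanos} pair to a UTC datetime.
from datetime import datetime, timedelta, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    # Integer arithmetic avoids float rounding; datetime only carries microseconds,
    # so the nanosecond part is truncated to whole microseconds.
    return datetime.fromtimestamp(seconds, tz=timezone.utc) + timedelta(microseconds=nanos // 1000)

# Values taken from the TaskExit event above.
print(exited_at_to_utc(1751847743, 529532122))  # 2025-07-07 00:22:23.529532+00:00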
Jul 7 00:22:25.112686 containerd[1598]: time="2025-07-07T00:22:25.112647349Z" level=info msg="StartContainer for \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" returns successfully" Jul 7 00:22:25.124171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:22:25.124516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:22:25.124755 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:22:25.127010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:22:25.128076 systemd[1]: cri-containerd-93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a.scope: Deactivated successfully. Jul 7 00:22:25.130136 containerd[1598]: time="2025-07-07T00:22:25.129045481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" id:\"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" pid:3230 exited_at:{seconds:1751847745 nanos:128815486}" Jul 7 00:22:25.130136 containerd[1598]: time="2025-07-07T00:22:25.129209912Z" level=info msg="received exit event container_id:\"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" id:\"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" pid:3230 exited_at:{seconds:1751847745 nanos:128815486}" Jul 7 00:22:25.149708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:22:25.694237 containerd[1598]: time="2025-07-07T00:22:25.694190573Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:25.694892 containerd[1598]: time="2025-07-07T00:22:25.694864709Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:22:25.695947 containerd[1598]: time="2025-07-07T00:22:25.695908675Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:22:25.697011 containerd[1598]: time="2025-07-07T00:22:25.696987046Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.394562902s" Jul 7 00:22:25.697076 containerd[1598]: time="2025-07-07T00:22:25.697011562Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:22:25.701581 containerd[1598]: time="2025-07-07T00:22:25.701545041Z" level=info msg="CreateContainer within sandbox \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:22:25.710198 containerd[1598]: time="2025-07-07T00:22:25.710149053Z" level=info msg="Container 9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e: CDI devices from CRI Config.CDIDevices: []" Jul 
7 00:22:25.716029 containerd[1598]: time="2025-07-07T00:22:25.715995989Z" level=info msg="CreateContainer within sandbox \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\"" Jul 7 00:22:25.716577 containerd[1598]: time="2025-07-07T00:22:25.716368694Z" level=info msg="StartContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\"" Jul 7 00:22:25.717269 containerd[1598]: time="2025-07-07T00:22:25.717237237Z" level=info msg="connecting to shim 9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e" address="unix:///run/containerd/s/fab788275e298d10bf9a2fc804f0dedd9bfd846762c28883c7a9311aa16ef092" protocol=ttrpc version=3 Jul 7 00:22:25.745751 systemd[1]: Started cri-containerd-9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e.scope - libcontainer container 9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e. Jul 7 00:22:25.774348 containerd[1598]: time="2025-07-07T00:22:25.774296267Z" level=info msg="StartContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" returns successfully" Jul 7 00:22:26.021809 kubelet[2721]: E0707 00:22:26.021515 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:26.026050 kubelet[2721]: E0707 00:22:26.026015 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:26.031704 containerd[1598]: time="2025-07-07T00:22:26.031406013Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:22:26.048167 kubelet[2721]: I0707 00:22:26.045550 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dtc8l" podStartSLOduration=1.083243535 podStartE2EDuration="14.045534935s" podCreationTimestamp="2025-07-07 00:22:12 +0000 UTC" firstStartedPulling="2025-07-07 00:22:12.735225628 +0000 UTC m=+6.877418447" lastFinishedPulling="2025-07-07 00:22:25.697517028 +0000 UTC m=+19.839709847" observedRunningTime="2025-07-07 00:22:26.030095194 +0000 UTC m=+20.172288013" watchObservedRunningTime="2025-07-07 00:22:26.045534935 +0000 UTC m=+20.187727754" Jul 7 00:22:26.048289 containerd[1598]: time="2025-07-07T00:22:26.045877263Z" level=info msg="Container c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:26.051047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718428837.mount: Deactivated successfully. 
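Annotation: the cilium-operator startup entry above is the first in this boot with a real image-pull window, and its numbers fit a simple relationship: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that E2E figure minus the pull window (lastFinishedPulling minus firstStartedPulling). A short arithmetic check using the timestamps copied from the entry:

# Sketch: check that the startup figures reported for cilium-operator-6c4d7847fc-dtc8l
# are self-consistent. Timestamps are copied from the journal entry above.
from datetime import datetime, timezone

def parse(ts: str) -> datetime:
    # The kubelet prints nanosecond fractions; trim to microseconds for strptime.
    date, clock = ts.split()[0], ts.split()[1]
    if "." not in clock:
        clock += ".0"
    whole, frac = clock.split(".")
    return datetime.strptime(f"{date} {whole}.{frac[:6]}",
                             "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = parse("2025-07-07 00:22:12 +0000 UTC")            # podCreationTimestamp
first_pull = parse("2025-07-07 00:22:12.735225628 +0000 UTC")  # firstStartedPulling
last_pull  = parse("2025-07-07 00:22:25.697517028 +0000 UTC")  # lastFinishedPulling
observed   = parse("2025-07-07 00:22:26.045534935 +0000 UTC")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
pull = (last_pull - first_pull).total_seconds()
print(f"E2E: {e2e:.6f}s, pull: {pull:.6f}s, E2E - pull: {e2e - pull:.6f}s")
# Prints roughly E2E: 14.045534s, pull: 12.962292s, E2E - pull: 1.083242s — in line with
# the logged podStartE2EDuration (14.045534935s) and podStartSLOduration (1.083243535s)
# once the nanosecond fractions are truncated to microseconds.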
Jul 7 00:22:26.055873 containerd[1598]: time="2025-07-07T00:22:26.055832001Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\"" Jul 7 00:22:26.056375 containerd[1598]: time="2025-07-07T00:22:26.056330764Z" level=info msg="StartContainer for \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\"" Jul 7 00:22:26.057689 containerd[1598]: time="2025-07-07T00:22:26.057663796Z" level=info msg="connecting to shim c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" protocol=ttrpc version=3 Jul 7 00:22:26.095729 systemd[1]: Started cri-containerd-c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa.scope - libcontainer container c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa. Jul 7 00:22:26.168331 systemd[1]: cri-containerd-c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa.scope: Deactivated successfully. Jul 7 00:22:26.170139 containerd[1598]: time="2025-07-07T00:22:26.170104248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" id:\"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" pid:3319 exited_at:{seconds:1751847746 nanos:169699522}" Jul 7 00:22:26.171155 containerd[1598]: time="2025-07-07T00:22:26.171114719Z" level=info msg="received exit event container_id:\"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" id:\"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" pid:3319 exited_at:{seconds:1751847746 nanos:169699522}" Jul 7 00:22:26.175530 containerd[1598]: time="2025-07-07T00:22:26.175500654Z" level=info msg="StartContainer for \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" returns successfully" Jul 7 00:22:26.195627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa-rootfs.mount: Deactivated successfully. Jul 7 00:22:27.030083 kubelet[2721]: E0707 00:22:27.029674 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:27.030083 kubelet[2721]: E0707 00:22:27.029889 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:27.036937 containerd[1598]: time="2025-07-07T00:22:27.036863169Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:22:27.050737 containerd[1598]: time="2025-07-07T00:22:27.050669888Z" level=info msg="Container 045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:27.053845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5496856.mount: Deactivated successfully. 
Jul 7 00:22:27.058121 containerd[1598]: time="2025-07-07T00:22:27.058091929Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\"" Jul 7 00:22:27.058440 containerd[1598]: time="2025-07-07T00:22:27.058408617Z" level=info msg="StartContainer for \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\"" Jul 7 00:22:27.059340 containerd[1598]: time="2025-07-07T00:22:27.059316474Z" level=info msg="connecting to shim 045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" protocol=ttrpc version=3 Jul 7 00:22:27.082762 systemd[1]: Started cri-containerd-045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0.scope - libcontainer container 045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0. Jul 7 00:22:27.108513 systemd[1]: cri-containerd-045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0.scope: Deactivated successfully. Jul 7 00:22:27.109195 containerd[1598]: time="2025-07-07T00:22:27.109146838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" id:\"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" pid:3358 exited_at:{seconds:1751847747 nanos:108709813}" Jul 7 00:22:27.131303 containerd[1598]: time="2025-07-07T00:22:27.131255320Z" level=info msg="received exit event container_id:\"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" id:\"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" pid:3358 exited_at:{seconds:1751847747 nanos:108709813}" Jul 7 00:22:27.138582 containerd[1598]: time="2025-07-07T00:22:27.138547815Z" level=info msg="StartContainer for \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" returns successfully" Jul 7 00:22:27.150697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0-rootfs.mount: Deactivated successfully. Jul 7 00:22:28.034533 kubelet[2721]: E0707 00:22:28.034486 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:28.041188 containerd[1598]: time="2025-07-07T00:22:28.041127242Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:22:28.054721 containerd[1598]: time="2025-07-07T00:22:28.054666523Z" level=info msg="Container 7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:28.059010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670722794.mount: Deactivated successfully. 
Jul 7 00:22:28.062549 containerd[1598]: time="2025-07-07T00:22:28.062510203Z" level=info msg="CreateContainer within sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\"" Jul 7 00:22:28.063102 containerd[1598]: time="2025-07-07T00:22:28.063011239Z" level=info msg="StartContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\"" Jul 7 00:22:28.063896 containerd[1598]: time="2025-07-07T00:22:28.063874570Z" level=info msg="connecting to shim 7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694" address="unix:///run/containerd/s/8108d11d8cb4b4f61be08cfae19a2afc7d0e615e96390a7f05e47219bca5fbc9" protocol=ttrpc version=3 Jul 7 00:22:28.080705 systemd[1]: Started cri-containerd-7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694.scope - libcontainer container 7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694. Jul 7 00:22:28.116623 containerd[1598]: time="2025-07-07T00:22:28.116556756Z" level=info msg="StartContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" returns successfully" Jul 7 00:22:28.288918 kubelet[2721]: I0707 00:22:28.288031 2721 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:22:28.294595 containerd[1598]: time="2025-07-07T00:22:28.294485276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" id:\"1762e0083cec2eb285f226581227f71fd5cfa52b457a5bf52abbac9a3b8ec058\" pid:3428 exited_at:{seconds:1751847748 nanos:218382220}" Jul 7 00:22:28.340242 systemd[1]: Created slice kubepods-burstable-pod5daafe07_d890_4ddb_b90d_98bc8bbc5da9.slice - libcontainer container kubepods-burstable-pod5daafe07_d890_4ddb_b90d_98bc8bbc5da9.slice. Jul 7 00:22:28.348094 systemd[1]: Created slice kubepods-burstable-podc23582b1_7883_43dc_b14f_f2ab830f8f51.slice - libcontainer container kubepods-burstable-podc23582b1_7883_43dc_b14f_f2ab830f8f51.slice. 
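Annotation: this completes the Cilium container sequence inside sandbox 8666e16b…: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run and exit in turn, the long-running cilium-agent starts, and the node then reports ready so the CoreDNS pods can be scheduled. That sequence can also be recovered mechanically from a saved copy of this journal; a rough sketch below, where the "node.log" filename is an assumption and the regex only approximates the containerd message format shown above:

# Sketch: list the container names created inside one sandbox, in order of appearance,
# from a saved journal dump ("node.log" is a placeholder path).
import re

SANDBOX = "8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2"
CREATE = re.compile(r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{64})\\?" for '
                    r'(?:container )?&ContainerMetadata{Name:(?P<name>[^,]+),')

def container_sequence(path: str, sandbox: str = SANDBOX) -> list[str]:
    names: list[str] = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in CREATE.finditer(line):
                if m.group("sandbox") == sandbox and m.group("name") not in names:
                    names.append(m.group("name"))
    return names

if __name__ == "__main__":
    print(container_sequence("node.log"))
    # Expected from this boot: ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #                           'clean-cilium-state', 'cilium-agent']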
Jul 7 00:22:28.383739 kubelet[2721]: I0707 00:22:28.383693 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp72k\" (UniqueName: \"kubernetes.io/projected/5daafe07-d890-4ddb-b90d-98bc8bbc5da9-kube-api-access-xp72k\") pod \"coredns-674b8bbfcf-2c42j\" (UID: \"5daafe07-d890-4ddb-b90d-98bc8bbc5da9\") " pod="kube-system/coredns-674b8bbfcf-2c42j" Jul 7 00:22:28.383739 kubelet[2721]: I0707 00:22:28.383728 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5daafe07-d890-4ddb-b90d-98bc8bbc5da9-config-volume\") pod \"coredns-674b8bbfcf-2c42j\" (UID: \"5daafe07-d890-4ddb-b90d-98bc8bbc5da9\") " pod="kube-system/coredns-674b8bbfcf-2c42j" Jul 7 00:22:28.383739 kubelet[2721]: I0707 00:22:28.383746 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c23582b1-7883-43dc-b14f-f2ab830f8f51-config-volume\") pod \"coredns-674b8bbfcf-f7wgh\" (UID: \"c23582b1-7883-43dc-b14f-f2ab830f8f51\") " pod="kube-system/coredns-674b8bbfcf-f7wgh" Jul 7 00:22:28.383942 kubelet[2721]: I0707 00:22:28.383762 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nms\" (UniqueName: \"kubernetes.io/projected/c23582b1-7883-43dc-b14f-f2ab830f8f51-kube-api-access-h5nms\") pod \"coredns-674b8bbfcf-f7wgh\" (UID: \"c23582b1-7883-43dc-b14f-f2ab830f8f51\") " pod="kube-system/coredns-674b8bbfcf-f7wgh" Jul 7 00:22:28.646218 kubelet[2721]: E0707 00:22:28.646111 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:28.646827 containerd[1598]: time="2025-07-07T00:22:28.646787269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c42j,Uid:5daafe07-d890-4ddb-b90d-98bc8bbc5da9,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:28.651475 kubelet[2721]: E0707 00:22:28.651447 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:28.652005 containerd[1598]: time="2025-07-07T00:22:28.651954271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f7wgh,Uid:c23582b1-7883-43dc-b14f-f2ab830f8f51,Namespace:kube-system,Attempt:0,}" Jul 7 00:22:29.040716 kubelet[2721]: E0707 00:22:29.040685 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:29.054128 kubelet[2721]: I0707 00:22:29.054012 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rmtj2" podStartSLOduration=6.339936976 podStartE2EDuration="17.053997255s" podCreationTimestamp="2025-07-07 00:22:12 +0000 UTC" firstStartedPulling="2025-07-07 00:22:12.588037085 +0000 UTC m=+6.730229894" lastFinishedPulling="2025-07-07 00:22:23.302097354 +0000 UTC m=+17.444290173" observedRunningTime="2025-07-07 00:22:29.053977578 +0000 UTC m=+23.196170417" watchObservedRunningTime="2025-07-07 00:22:29.053997255 +0000 UTC m=+23.196190074" Jul 7 00:22:30.043847 kubelet[2721]: E0707 00:22:30.042994 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:30.263082 systemd-networkd[1491]: cilium_host: Link UP Jul 7 00:22:30.263238 systemd-networkd[1491]: cilium_net: Link UP Jul 7 00:22:30.263401 systemd-networkd[1491]: cilium_net: Gained carrier Jul 7 00:22:30.263558 systemd-networkd[1491]: cilium_host: Gained carrier Jul 7 00:22:30.360630 systemd-networkd[1491]: cilium_vxlan: Link UP Jul 7 00:22:30.360804 systemd-networkd[1491]: cilium_vxlan: Gained carrier Jul 7 00:22:30.490764 systemd-networkd[1491]: cilium_host: Gained IPv6LL Jul 7 00:22:30.561626 kernel: NET: Registered PF_ALG protocol family Jul 7 00:22:31.043686 kubelet[2721]: E0707 00:22:31.043609 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:31.066781 systemd-networkd[1491]: cilium_net: Gained IPv6LL Jul 7 00:22:31.177003 systemd-networkd[1491]: lxc_health: Link UP Jul 7 00:22:31.178015 systemd-networkd[1491]: lxc_health: Gained carrier Jul 7 00:22:31.694539 kernel: eth0: renamed from tmp80217 Jul 7 00:22:31.693981 systemd-networkd[1491]: lxc00c21bdc6c9f: Link UP Jul 7 00:22:31.694279 systemd-networkd[1491]: lxc00c21bdc6c9f: Gained carrier Jul 7 00:22:31.694417 systemd-networkd[1491]: lxc545bcc3d3768: Link UP Jul 7 00:22:31.704687 kernel: eth0: renamed from tmp47ed6 Jul 7 00:22:31.706144 systemd-networkd[1491]: lxc545bcc3d3768: Gained carrier Jul 7 00:22:32.026795 systemd-networkd[1491]: cilium_vxlan: Gained IPv6LL Jul 7 00:22:32.470332 kubelet[2721]: E0707 00:22:32.470215 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:32.666774 systemd-networkd[1491]: lxc_health: Gained IPv6LL Jul 7 00:22:32.922747 systemd-networkd[1491]: lxc545bcc3d3768: Gained IPv6LL Jul 7 00:22:33.046541 kubelet[2721]: E0707 00:22:33.046500 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:33.690856 systemd-networkd[1491]: lxc00c21bdc6c9f: Gained IPv6LL Jul 7 00:22:34.048726 kubelet[2721]: E0707 00:22:34.048258 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:35.167237 containerd[1598]: time="2025-07-07T00:22:35.167179898Z" level=info msg="connecting to shim 47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6" address="unix:///run/containerd/s/30ce7cf13ca1377e3131047e6f64c13570a0fd19c9fd094f386c5b88bd087e52" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:35.168299 containerd[1598]: time="2025-07-07T00:22:35.168018317Z" level=info msg="connecting to shim 8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d" address="unix:///run/containerd/s/5725dad12271be12fa7ea56b36537753090fadf151085376f9e9c3095a330e79" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:22:35.188714 systemd[1]: Started cri-containerd-47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6.scope - libcontainer container 47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6. 
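Annotation: by this point systemd-networkd has brought up the Cilium datapath interfaces (cilium_host, cilium_net, cilium_vxlan, lxc_health, plus one lxc* veth per CoreDNS pod). On a live node the same picture can be read back from iproute2's JSON output; a small sketch, assuming an ip binary with JSON support (ip -j) is available on the host:

# Sketch: list cilium_* and lxc* interfaces with their operational state.
# Assumes iproute2 with JSON support ("ip -j link show") is present on the node.
import json
import subprocess

def cilium_links() -> list[tuple[str, str]]:
    out = subprocess.run(["ip", "-j", "link", "show"],
                         check=True, capture_output=True, text=True).stdout
    return [(link["ifname"], link.get("operstate", "?"))
            for link in json.loads(out)
            if link["ifname"].startswith(("cilium_", "lxc"))]

if __name__ == "__main__":
    for name, state in cilium_links():
        print(f"{name}: {state}")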
Jul 7 00:22:35.192135 systemd[1]: Started cri-containerd-8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d.scope - libcontainer container 8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d. Jul 7 00:22:35.203195 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:22:35.205066 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:22:35.236157 containerd[1598]: time="2025-07-07T00:22:35.236110041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f7wgh,Uid:c23582b1-7883-43dc-b14f-f2ab830f8f51,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6\"" Jul 7 00:22:35.236830 kubelet[2721]: E0707 00:22:35.236805 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:35.239439 containerd[1598]: time="2025-07-07T00:22:35.239362248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2c42j,Uid:5daafe07-d890-4ddb-b90d-98bc8bbc5da9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d\"" Jul 7 00:22:35.241351 containerd[1598]: time="2025-07-07T00:22:35.241315058Z" level=info msg="CreateContainer within sandbox \"47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:22:35.250735 kubelet[2721]: E0707 00:22:35.250704 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:35.251181 containerd[1598]: time="2025-07-07T00:22:35.251032457Z" level=info msg="Container d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:35.257796 containerd[1598]: time="2025-07-07T00:22:35.257740506Z" level=info msg="CreateContainer within sandbox \"8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:22:35.262313 containerd[1598]: time="2025-07-07T00:22:35.262287292Z" level=info msg="CreateContainer within sandbox \"47ed697a1665afb049c05273636fc6122783829921b1a2525797a09e12507bf6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091\"" Jul 7 00:22:35.262680 containerd[1598]: time="2025-07-07T00:22:35.262638764Z" level=info msg="StartContainer for \"d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091\"" Jul 7 00:22:35.263312 containerd[1598]: time="2025-07-07T00:22:35.263281666Z" level=info msg="connecting to shim d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091" address="unix:///run/containerd/s/30ce7cf13ca1377e3131047e6f64c13570a0fd19c9fd094f386c5b88bd087e52" protocol=ttrpc version=3 Jul 7 00:22:35.268873 containerd[1598]: time="2025-07-07T00:22:35.268845439Z" level=info msg="Container 1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:22:35.274886 containerd[1598]: time="2025-07-07T00:22:35.274786453Z" level=info msg="CreateContainer within sandbox 
\"8021771758e2296439e0a1656973cd9cd4c85a18370dcb710a21606c7ca27c7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2\"" Jul 7 00:22:35.275551 containerd[1598]: time="2025-07-07T00:22:35.275533220Z" level=info msg="StartContainer for \"1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2\"" Jul 7 00:22:35.276561 containerd[1598]: time="2025-07-07T00:22:35.276505631Z" level=info msg="connecting to shim 1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2" address="unix:///run/containerd/s/5725dad12271be12fa7ea56b36537753090fadf151085376f9e9c3095a330e79" protocol=ttrpc version=3 Jul 7 00:22:35.286754 systemd[1]: Started cri-containerd-d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091.scope - libcontainer container d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091. Jul 7 00:22:35.300732 systemd[1]: Started cri-containerd-1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2.scope - libcontainer container 1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2. Jul 7 00:22:35.328799 containerd[1598]: time="2025-07-07T00:22:35.328751911Z" level=info msg="StartContainer for \"d73c2b0e66dd13937cad3ec3750dfab28f64c01fabad1498c4a2ce1d9d4c5091\" returns successfully" Jul 7 00:22:35.330535 containerd[1598]: time="2025-07-07T00:22:35.330490277Z" level=info msg="StartContainer for \"1b021ca43740a83ce9abffa9687be348a52e841653f2421db94b1c9da9c859f2\" returns successfully" Jul 7 00:22:36.052106 kubelet[2721]: E0707 00:22:36.052062 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:36.054310 kubelet[2721]: E0707 00:22:36.054238 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:36.308030 kubelet[2721]: I0707 00:22:36.307874 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f7wgh" podStartSLOduration=24.307860219 podStartE2EDuration="24.307860219s" podCreationTimestamp="2025-07-07 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:36.307089147 +0000 UTC m=+30.449281966" watchObservedRunningTime="2025-07-07 00:22:36.307860219 +0000 UTC m=+30.450053028" Jul 7 00:22:36.320265 kubelet[2721]: I0707 00:22:36.320188 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2c42j" podStartSLOduration=24.320169104 podStartE2EDuration="24.320169104s" podCreationTimestamp="2025-07-07 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:22:36.319258598 +0000 UTC m=+30.461451448" watchObservedRunningTime="2025-07-07 00:22:36.320169104 +0000 UTC m=+30.462361923" Jul 7 00:22:36.838563 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:44406.service - OpenSSH per-connection server daemon (10.0.0.1:44406). 
Jul 7 00:22:36.892607 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 44406 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:36.894168 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:36.898823 systemd-logind[1579]: New session 8 of user core. Jul 7 00:22:36.908723 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:22:37.056568 kubelet[2721]: E0707 00:22:37.056252 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:37.056568 kubelet[2721]: E0707 00:22:37.056453 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:37.117801 sshd[4084]: Connection closed by 10.0.0.1 port 44406 Jul 7 00:22:37.118043 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:37.122352 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:44406.service: Deactivated successfully. Jul 7 00:22:37.124371 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:22:37.125108 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:22:37.126484 systemd-logind[1579]: Removed session 8. Jul 7 00:22:38.057956 kubelet[2721]: E0707 00:22:38.057921 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:38.058360 kubelet[2721]: E0707 00:22:38.058003 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:22:42.141516 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:47496.service - OpenSSH per-connection server daemon (10.0.0.1:47496). Jul 7 00:22:42.187318 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 47496 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:42.188673 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:42.192788 systemd-logind[1579]: New session 9 of user core. Jul 7 00:22:42.204706 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:22:42.307891 sshd[4102]: Connection closed by 10.0.0.1 port 47496 Jul 7 00:22:42.308239 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:42.312517 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:47496.service: Deactivated successfully. Jul 7 00:22:42.314533 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:22:42.315309 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:22:42.316443 systemd-logind[1579]: Removed session 9. Jul 7 00:22:47.324495 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:47508.service - OpenSSH per-connection server daemon (10.0.0.1:47508). Jul 7 00:22:47.373417 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 47508 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:47.375112 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:47.380800 systemd-logind[1579]: New session 10 of user core. Jul 7 00:22:47.388740 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 00:22:47.493128 sshd[4121]: Connection closed by 10.0.0.1 port 47508 Jul 7 00:22:47.493453 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:47.497319 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:47508.service: Deactivated successfully. Jul 7 00:22:47.499422 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:22:47.502143 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:22:47.503194 systemd-logind[1579]: Removed session 10. Jul 7 00:22:52.508503 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:52438.service - OpenSSH per-connection server daemon (10.0.0.1:52438). Jul 7 00:22:52.557381 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 52438 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:52.558780 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:52.562873 systemd-logind[1579]: New session 11 of user core. Jul 7 00:22:52.576719 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:22:52.686603 sshd[4138]: Connection closed by 10.0.0.1 port 52438 Jul 7 00:22:52.686903 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:52.695073 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:52438.service: Deactivated successfully. Jul 7 00:22:52.696691 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:22:52.697455 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:22:52.700401 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:52446.service - OpenSSH per-connection server daemon (10.0.0.1:52446). Jul 7 00:22:52.701014 systemd-logind[1579]: Removed session 11. Jul 7 00:22:52.757046 sshd[4152]: Accepted publickey for core from 10.0.0.1 port 52446 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:52.758357 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:52.762465 systemd-logind[1579]: New session 12 of user core. Jul 7 00:22:52.775704 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:22:52.915086 sshd[4154]: Connection closed by 10.0.0.1 port 52446 Jul 7 00:22:52.915843 sshd-session[4152]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:52.927459 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:52446.service: Deactivated successfully. Jul 7 00:22:52.930226 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:22:52.931473 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:22:52.935519 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Jul 7 00:22:52.937362 systemd-logind[1579]: Removed session 12. Jul 7 00:22:52.980466 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:52.981854 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:52.986218 systemd-logind[1579]: New session 13 of user core. Jul 7 00:22:52.996733 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:22:53.107148 sshd[4168]: Connection closed by 10.0.0.1 port 52448 Jul 7 00:22:53.107637 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:53.111962 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:52448.service: Deactivated successfully. 
Jul 7 00:22:53.113989 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:22:53.114759 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:22:53.115939 systemd-logind[1579]: Removed session 13. Jul 7 00:22:58.123508 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:45574.service - OpenSSH per-connection server daemon (10.0.0.1:45574). Jul 7 00:22:58.184717 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 45574 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:22:58.186194 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:22:58.190045 systemd-logind[1579]: New session 14 of user core. Jul 7 00:22:58.197724 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:22:58.308772 sshd[4183]: Connection closed by 10.0.0.1 port 45574 Jul 7 00:22:58.309083 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jul 7 00:22:58.314198 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:45574.service: Deactivated successfully. Jul 7 00:22:58.316202 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:22:58.317173 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:22:58.318698 systemd-logind[1579]: Removed session 14. Jul 7 00:23:03.324102 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588). Jul 7 00:23:03.369242 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:03.370509 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:03.374664 systemd-logind[1579]: New session 15 of user core. Jul 7 00:23:03.391721 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:23:03.497303 sshd[4198]: Connection closed by 10.0.0.1 port 45588 Jul 7 00:23:03.497664 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:03.506217 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:45588.service: Deactivated successfully. Jul 7 00:23:03.508103 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:23:03.509034 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:23:03.512151 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:45592.service - OpenSSH per-connection server daemon (10.0.0.1:45592). Jul 7 00:23:03.513032 systemd-logind[1579]: Removed session 15. Jul 7 00:23:03.558249 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 45592 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:03.559758 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:03.564118 systemd-logind[1579]: New session 16 of user core. Jul 7 00:23:03.570706 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:23:03.812008 sshd[4213]: Connection closed by 10.0.0.1 port 45592 Jul 7 00:23:03.812309 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:03.824236 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:45592.service: Deactivated successfully. Jul 7 00:23:03.826107 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:23:03.827001 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit. 
Jul 7 00:23:03.829987 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:45598.service - OpenSSH per-connection server daemon (10.0.0.1:45598). Jul 7 00:23:03.830608 systemd-logind[1579]: Removed session 16. Jul 7 00:23:03.887091 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 45598 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:03.888863 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:03.893069 systemd-logind[1579]: New session 17 of user core. Jul 7 00:23:03.902724 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:23:04.615123 sshd[4228]: Connection closed by 10.0.0.1 port 45598 Jul 7 00:23:04.615616 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:04.624450 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:45598.service: Deactivated successfully. Jul 7 00:23:04.627062 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:23:04.628217 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:23:04.631468 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:45606.service - OpenSSH per-connection server daemon (10.0.0.1:45606). Jul 7 00:23:04.632894 systemd-logind[1579]: Removed session 17. Jul 7 00:23:04.674571 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 45606 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:04.675891 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:04.680085 systemd-logind[1579]: New session 18 of user core. Jul 7 00:23:04.689734 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:23:04.907581 sshd[4249]: Connection closed by 10.0.0.1 port 45606 Jul 7 00:23:04.907923 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:04.920853 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:45606.service: Deactivated successfully. Jul 7 00:23:04.922867 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:23:04.923693 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:23:04.927188 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:45620.service - OpenSSH per-connection server daemon (10.0.0.1:45620). Jul 7 00:23:04.927882 systemd-logind[1579]: Removed session 18. Jul 7 00:23:04.974832 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 45620 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:04.976409 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:04.980421 systemd-logind[1579]: New session 19 of user core. Jul 7 00:23:04.986698 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:23:05.089495 sshd[4263]: Connection closed by 10.0.0.1 port 45620 Jul 7 00:23:05.089824 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:05.093521 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:45620.service: Deactivated successfully. Jul 7 00:23:05.095706 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:23:05.097432 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:23:05.098955 systemd-logind[1579]: Removed session 19. Jul 7 00:23:10.105168 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:49762.service - OpenSSH per-connection server daemon (10.0.0.1:49762). 
Jul 7 00:23:10.160539 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 49762 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:10.161944 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:10.165957 systemd-logind[1579]: New session 20 of user core. Jul 7 00:23:10.173708 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:23:10.279440 sshd[4283]: Connection closed by 10.0.0.1 port 49762 Jul 7 00:23:10.279769 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:10.283501 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:49762.service: Deactivated successfully. Jul 7 00:23:10.285375 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:23:10.286208 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:23:10.287395 systemd-logind[1579]: Removed session 20. Jul 7 00:23:15.296436 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:49774.service - OpenSSH per-connection server daemon (10.0.0.1:49774). Jul 7 00:23:15.350468 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 49774 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:15.351703 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:15.355711 systemd-logind[1579]: New session 21 of user core. Jul 7 00:23:15.366742 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:23:15.469188 sshd[4300]: Connection closed by 10.0.0.1 port 49774 Jul 7 00:23:15.469506 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:15.473523 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:49774.service: Deactivated successfully. Jul 7 00:23:15.475282 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:23:15.476059 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:23:15.477261 systemd-logind[1579]: Removed session 21. Jul 7 00:23:20.485083 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:41432.service - OpenSSH per-connection server daemon (10.0.0.1:41432). Jul 7 00:23:20.535956 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:20.537660 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:20.541741 systemd-logind[1579]: New session 22 of user core. Jul 7 00:23:20.555709 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:23:20.665016 sshd[4315]: Connection closed by 10.0.0.1 port 41432 Jul 7 00:23:20.665286 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:20.681466 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:41432.service: Deactivated successfully. Jul 7 00:23:20.683460 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:23:20.684237 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:23:20.687429 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:41444.service - OpenSSH per-connection server daemon (10.0.0.1:41444). Jul 7 00:23:20.688074 systemd-logind[1579]: Removed session 22. 
Jul 7 00:23:20.728526 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:20.729947 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:20.734026 systemd-logind[1579]: New session 23 of user core. Jul 7 00:23:20.743725 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:23:22.066683 containerd[1598]: time="2025-07-07T00:23:22.066193916Z" level=info msg="StopContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" with timeout 30 (s)" Jul 7 00:23:22.077546 containerd[1598]: time="2025-07-07T00:23:22.077465238Z" level=info msg="Stop container \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" with signal terminated" Jul 7 00:23:22.091479 systemd[1]: cri-containerd-9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e.scope: Deactivated successfully. Jul 7 00:23:22.093783 containerd[1598]: time="2025-07-07T00:23:22.093747977Z" level=info msg="received exit event container_id:\"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" id:\"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" pid:3282 exited_at:{seconds:1751847802 nanos:92795984}" Jul 7 00:23:22.093905 containerd[1598]: time="2025-07-07T00:23:22.093871204Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" id:\"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" pid:3282 exited_at:{seconds:1751847802 nanos:92795984}" Jul 7 00:23:22.102252 containerd[1598]: time="2025-07-07T00:23:22.102067905Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:23:22.102792 containerd[1598]: time="2025-07-07T00:23:22.102758114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" id:\"670f2757445c6a2be38947938f6f2bfd817d3e0c1fa83031e97aad6e29ff1e20\" pid:4351 exited_at:{seconds:1751847802 nanos:102476462}" Jul 7 00:23:22.104826 containerd[1598]: time="2025-07-07T00:23:22.104787331Z" level=info msg="StopContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" with timeout 2 (s)" Jul 7 00:23:22.105074 containerd[1598]: time="2025-07-07T00:23:22.105035690Z" level=info msg="Stop container \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" with signal terminated" Jul 7 00:23:22.112117 systemd-networkd[1491]: lxc_health: Link DOWN Jul 7 00:23:22.112126 systemd-networkd[1491]: lxc_health: Lost carrier Jul 7 00:23:22.120405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e-rootfs.mount: Deactivated successfully. Jul 7 00:23:22.129004 systemd[1]: cri-containerd-7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694.scope: Deactivated successfully. Jul 7 00:23:22.129398 systemd[1]: cri-containerd-7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694.scope: Consumed 6.169s CPU time, 126.5M memory peak, 248K read from disk, 13.3M written to disk. 
Jul 7 00:23:22.130323 containerd[1598]: time="2025-07-07T00:23:22.130193257Z" level=info msg="received exit event container_id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" pid:3395 exited_at:{seconds:1751847802 nanos:129652435}" Jul 7 00:23:22.130672 containerd[1598]: time="2025-07-07T00:23:22.130245316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" id:\"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" pid:3395 exited_at:{seconds:1751847802 nanos:129652435}" Jul 7 00:23:22.132792 containerd[1598]: time="2025-07-07T00:23:22.132763064Z" level=info msg="StopContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" returns successfully" Jul 7 00:23:22.133310 containerd[1598]: time="2025-07-07T00:23:22.133180738Z" level=info msg="StopPodSandbox for \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\"" Jul 7 00:23:22.133310 containerd[1598]: time="2025-07-07T00:23:22.133244791Z" level=info msg="Container to stop \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.140735 systemd[1]: cri-containerd-ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b.scope: Deactivated successfully. Jul 7 00:23:22.142352 containerd[1598]: time="2025-07-07T00:23:22.142314905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" id:\"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" pid:2962 exit_status:137 exited_at:{seconds:1751847802 nanos:141975702}" Jul 7 00:23:22.152022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694-rootfs.mount: Deactivated successfully. 
Jul 7 00:23:22.162936 containerd[1598]: time="2025-07-07T00:23:22.162885229Z" level=info msg="StopContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" returns successfully" Jul 7 00:23:22.163763 containerd[1598]: time="2025-07-07T00:23:22.163708254Z" level=info msg="StopPodSandbox for \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\"" Jul 7 00:23:22.163891 containerd[1598]: time="2025-07-07T00:23:22.163781575Z" level=info msg="Container to stop \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.163891 containerd[1598]: time="2025-07-07T00:23:22.163793989Z" level=info msg="Container to stop \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.163891 containerd[1598]: time="2025-07-07T00:23:22.163802806Z" level=info msg="Container to stop \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.163891 containerd[1598]: time="2025-07-07T00:23:22.163811753Z" level=info msg="Container to stop \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.163891 containerd[1598]: time="2025-07-07T00:23:22.163821211Z" level=info msg="Container to stop \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:23:22.170113 systemd[1]: cri-containerd-8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2.scope: Deactivated successfully. Jul 7 00:23:22.173415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b-rootfs.mount: Deactivated successfully. Jul 7 00:23:22.176264 containerd[1598]: time="2025-07-07T00:23:22.176199083Z" level=info msg="shim disconnected" id=ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b namespace=k8s.io Jul 7 00:23:22.176264 containerd[1598]: time="2025-07-07T00:23:22.176242817Z" level=warning msg="cleaning up after shim disconnected" id=ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b namespace=k8s.io Jul 7 00:23:22.184089 containerd[1598]: time="2025-07-07T00:23:22.176250792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:23:22.194376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2-rootfs.mount: Deactivated successfully. 
Jul 7 00:23:22.199262 containerd[1598]: time="2025-07-07T00:23:22.199188686Z" level=info msg="shim disconnected" id=8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2 namespace=k8s.io Jul 7 00:23:22.199262 containerd[1598]: time="2025-07-07T00:23:22.199207121Z" level=warning msg="cleaning up after shim disconnected" id=8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2 namespace=k8s.io Jul 7 00:23:22.199262 containerd[1598]: time="2025-07-07T00:23:22.199214355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:23:22.214417 containerd[1598]: time="2025-07-07T00:23:22.214359214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" id:\"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" pid:2876 exit_status:137 exited_at:{seconds:1751847802 nanos:173378943}" Jul 7 00:23:22.216796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b-shm.mount: Deactivated successfully. Jul 7 00:23:22.217984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2-shm.mount: Deactivated successfully. Jul 7 00:23:22.224461 containerd[1598]: time="2025-07-07T00:23:22.224327356Z" level=info msg="TearDown network for sandbox \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" successfully" Jul 7 00:23:22.224461 containerd[1598]: time="2025-07-07T00:23:22.224372773Z" level=info msg="StopPodSandbox for \"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" returns successfully" Jul 7 00:23:22.229610 containerd[1598]: time="2025-07-07T00:23:22.227861721Z" level=info msg="received exit event sandbox_id:\"ee11772bba322e00bfb172cff8ab6bae612598209f068244757153173d3a752b\" exit_status:137 exited_at:{seconds:1751847802 nanos:141975702}" Jul 7 00:23:22.229610 containerd[1598]: time="2025-07-07T00:23:22.228186165Z" level=info msg="received exit event sandbox_id:\"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" exit_status:137 exited_at:{seconds:1751847802 nanos:173378943}" Jul 7 00:23:22.229720 containerd[1598]: time="2025-07-07T00:23:22.229602292Z" level=info msg="TearDown network for sandbox \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" successfully" Jul 7 00:23:22.229720 containerd[1598]: time="2025-07-07T00:23:22.229638592Z" level=info msg="StopPodSandbox for \"8666e16be6751ada9b3502d11572b29b0c620b989ca8c1af13e477d917254cd2\" returns successfully" Jul 7 00:23:22.287419 kubelet[2721]: I0707 00:23:22.287353 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-bpf-maps\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287419 kubelet[2721]: I0707 00:23:22.287405 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cni-path\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287419 kubelet[2721]: I0707 00:23:22.287428 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwp4b\" (UniqueName: 
\"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-kube-api-access-vwp4b\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287450 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hubble-tls\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287470 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-config-path\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287485 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-cgroup\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287504 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-cilium-config-path\") pod \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\" (UID: \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287522 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-xtables-lock\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.287936 kubelet[2721]: I0707 00:23:22.287541 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-lib-modules\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288098 kubelet[2721]: I0707 00:23:22.287518 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.288098 kubelet[2721]: I0707 00:23:22.287556 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-kernel\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288098 kubelet[2721]: I0707 00:23:22.287636 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.288098 kubelet[2721]: I0707 00:23:22.287669 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-net\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288098 kubelet[2721]: I0707 00:23:22.287695 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-clustermesh-secrets\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287718 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m44b7\" (UniqueName: \"kubernetes.io/projected/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-kube-api-access-m44b7\") pod \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\" (UID: \"ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b\") " Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287736 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-run\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287751 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hostproc\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287765 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-etc-cni-netd\") pod \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\" (UID: \"cf9b3fc7-319a-4ea9-9813-406bb0cb6545\") " Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287818 2721 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.288250 kubelet[2721]: I0707 00:23:22.287828 2721 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.288397 kubelet[2721]: I0707 00:23:22.287851 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.288397 kubelet[2721]: I0707 00:23:22.287868 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.288397 kubelet[2721]: I0707 00:23:22.288024 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cni-path" (OuterVolumeSpecName: "cni-path") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.291608 kubelet[2721]: I0707 00:23:22.290998 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:23:22.291608 kubelet[2721]: I0707 00:23:22.291039 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.291910 kubelet[2721]: I0707 00:23:22.291878 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-kube-api-access-vwp4b" (OuterVolumeSpecName: "kube-api-access-vwp4b") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "kube-api-access-vwp4b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:23:22.291979 kubelet[2721]: I0707 00:23:22.291958 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:23:22.292011 kubelet[2721]: I0707 00:23:22.291987 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hostproc" (OuterVolumeSpecName: "hostproc") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.292011 kubelet[2721]: I0707 00:23:22.292003 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.292061 kubelet[2721]: I0707 00:23:22.292026 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.292111 kubelet[2721]: I0707 00:23:22.292092 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 00:23:22.292747 kubelet[2721]: I0707 00:23:22.292249 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cf9b3fc7-319a-4ea9-9813-406bb0cb6545" (UID: "cf9b3fc7-319a-4ea9-9813-406bb0cb6545"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:23:22.293528 kubelet[2721]: I0707 00:23:22.293491 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-kube-api-access-m44b7" (OuterVolumeSpecName: "kube-api-access-m44b7") pod "ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b" (UID: "ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b"). InnerVolumeSpecName "kube-api-access-m44b7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:23:22.294446 kubelet[2721]: I0707 00:23:22.294416 2721 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b" (UID: "ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.388941 2721 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.388981 2721 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m44b7\" (UniqueName: \"kubernetes.io/projected/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-kube-api-access-m44b7\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.388991 2721 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.389000 2721 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.389010 2721 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.389019 2721 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.389027 2721 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-vwp4b\" (UniqueName: \"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-kube-api-access-vwp4b\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389207 kubelet[2721]: I0707 00:23:22.389036 2721 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389045 2721 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389053 2721 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389060 2721 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389068 2721 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389198 2721 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:22.389537 kubelet[2721]: I0707 00:23:22.389208 2721 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf9b3fc7-319a-4ea9-9813-406bb0cb6545-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 7 00:23:23.119999 systemd[1]: var-lib-kubelet-pods-ef970f4b\x2d8dd0\x2d43a5\x2d8fa5\x2d69cbfe2f415b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm44b7.mount: Deactivated successfully. Jul 7 00:23:23.120158 systemd[1]: var-lib-kubelet-pods-cf9b3fc7\x2d319a\x2d4ea9\x2d9813\x2d406bb0cb6545-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwp4b.mount: Deactivated successfully. Jul 7 00:23:23.120269 systemd[1]: var-lib-kubelet-pods-cf9b3fc7\x2d319a\x2d4ea9\x2d9813\x2d406bb0cb6545-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:23:23.120374 systemd[1]: var-lib-kubelet-pods-cf9b3fc7\x2d319a\x2d4ea9\x2d9813\x2d406bb0cb6545-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:23:23.138740 kubelet[2721]: I0707 00:23:23.138669 2721 scope.go:117] "RemoveContainer" containerID="9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e" Jul 7 00:23:23.141451 containerd[1598]: time="2025-07-07T00:23:23.141408083Z" level=info msg="RemoveContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\"" Jul 7 00:23:23.144348 systemd[1]: Removed slice kubepods-besteffort-podef970f4b_8dd0_43a5_8fa5_69cbfe2f415b.slice - libcontainer container kubepods-besteffort-podef970f4b_8dd0_43a5_8fa5_69cbfe2f415b.slice. 
Jul 7 00:23:23.150251 containerd[1598]: time="2025-07-07T00:23:23.150129573Z" level=info msg="RemoveContainer for \"9633dcbb51a275ca9877b94bb40cbd0f91b30fbb815834904813f5b6d8e6db9e\" returns successfully" Jul 7 00:23:23.150706 kubelet[2721]: I0707 00:23:23.150679 2721 scope.go:117] "RemoveContainer" containerID="7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694" Jul 7 00:23:23.150996 systemd[1]: Removed slice kubepods-burstable-podcf9b3fc7_319a_4ea9_9813_406bb0cb6545.slice - libcontainer container kubepods-burstable-podcf9b3fc7_319a_4ea9_9813_406bb0cb6545.slice. Jul 7 00:23:23.151568 systemd[1]: kubepods-burstable-podcf9b3fc7_319a_4ea9_9813_406bb0cb6545.slice: Consumed 6.273s CPU time, 126.8M memory peak, 264K read from disk, 13.3M written to disk. Jul 7 00:23:23.152782 containerd[1598]: time="2025-07-07T00:23:23.152737891Z" level=info msg="RemoveContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\"" Jul 7 00:23:23.164322 containerd[1598]: time="2025-07-07T00:23:23.164285228Z" level=info msg="RemoveContainer for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" returns successfully" Jul 7 00:23:23.164580 kubelet[2721]: I0707 00:23:23.164540 2721 scope.go:117] "RemoveContainer" containerID="045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0" Jul 7 00:23:23.166968 containerd[1598]: time="2025-07-07T00:23:23.166925378Z" level=info msg="RemoveContainer for \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\"" Jul 7 00:23:23.172174 containerd[1598]: time="2025-07-07T00:23:23.172143968Z" level=info msg="RemoveContainer for \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" returns successfully" Jul 7 00:23:23.172412 kubelet[2721]: I0707 00:23:23.172382 2721 scope.go:117] "RemoveContainer" containerID="c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa" Jul 7 00:23:23.174696 containerd[1598]: time="2025-07-07T00:23:23.174663356Z" level=info msg="RemoveContainer for \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\"" Jul 7 00:23:23.178975 containerd[1598]: time="2025-07-07T00:23:23.178935786Z" level=info msg="RemoveContainer for \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" returns successfully" Jul 7 00:23:23.179159 kubelet[2721]: I0707 00:23:23.179124 2721 scope.go:117] "RemoveContainer" containerID="93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a" Jul 7 00:23:23.183410 containerd[1598]: time="2025-07-07T00:23:23.183328868Z" level=info msg="RemoveContainer for \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\"" Jul 7 00:23:23.187024 containerd[1598]: time="2025-07-07T00:23:23.186997666Z" level=info msg="RemoveContainer for \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" returns successfully" Jul 7 00:23:23.187145 kubelet[2721]: I0707 00:23:23.187123 2721 scope.go:117] "RemoveContainer" containerID="6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137" Jul 7 00:23:23.188533 containerd[1598]: time="2025-07-07T00:23:23.188501470Z" level=info msg="RemoveContainer for \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\"" Jul 7 00:23:23.191835 containerd[1598]: time="2025-07-07T00:23:23.191799395Z" level=info msg="RemoveContainer for \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" returns successfully" Jul 7 00:23:23.191967 kubelet[2721]: I0707 00:23:23.191936 2721 scope.go:117] "RemoveContainer" 
containerID="7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694" Jul 7 00:23:23.192174 containerd[1598]: time="2025-07-07T00:23:23.192137435Z" level=error msg="ContainerStatus for \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\": not found" Jul 7 00:23:23.192335 kubelet[2721]: E0707 00:23:23.192305 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\": not found" containerID="7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694" Jul 7 00:23:23.192380 kubelet[2721]: I0707 00:23:23.192340 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694"} err="failed to get container status \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b316c158e183853d88d166bd3e44cc54472166fb14590595427f60e98462694\": not found" Jul 7 00:23:23.192409 kubelet[2721]: I0707 00:23:23.192381 2721 scope.go:117] "RemoveContainer" containerID="045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0" Jul 7 00:23:23.192661 containerd[1598]: time="2025-07-07T00:23:23.192620605Z" level=error msg="ContainerStatus for \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\": not found" Jul 7 00:23:23.192791 kubelet[2721]: E0707 00:23:23.192767 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\": not found" containerID="045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0" Jul 7 00:23:23.192821 kubelet[2721]: I0707 00:23:23.192794 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0"} err="failed to get container status \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\": rpc error: code = NotFound desc = an error occurred when try to find container \"045373259523db2581d10adf6cf3c1e6dba564b4f47ead7e2cfadf2d2cf6eab0\": not found" Jul 7 00:23:23.192821 kubelet[2721]: I0707 00:23:23.192812 2721 scope.go:117] "RemoveContainer" containerID="c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa" Jul 7 00:23:23.192956 containerd[1598]: time="2025-07-07T00:23:23.192932605Z" level=error msg="ContainerStatus for \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\": not found" Jul 7 00:23:23.193053 kubelet[2721]: E0707 00:23:23.193034 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\": not found" 
containerID="c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa" Jul 7 00:23:23.193053 kubelet[2721]: I0707 00:23:23.193050 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa"} err="failed to get container status \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"c306b718674fa1b519537fe3405dc848ffa05c3ed1caa8d0e77c2b18fc2edfaa\": not found" Jul 7 00:23:23.193112 kubelet[2721]: I0707 00:23:23.193061 2721 scope.go:117] "RemoveContainer" containerID="93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a" Jul 7 00:23:23.193200 containerd[1598]: time="2025-07-07T00:23:23.193175953Z" level=error msg="ContainerStatus for \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\": not found" Jul 7 00:23:23.193283 kubelet[2721]: E0707 00:23:23.193260 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\": not found" containerID="93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a" Jul 7 00:23:23.193319 kubelet[2721]: I0707 00:23:23.193281 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a"} err="failed to get container status \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\": rpc error: code = NotFound desc = an error occurred when try to find container \"93acd5dbb5e0c15e92f3293042a1bb38b7dafcf550984718584b3b5bd433f31a\": not found" Jul 7 00:23:23.193319 kubelet[2721]: I0707 00:23:23.193296 2721 scope.go:117] "RemoveContainer" containerID="6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137" Jul 7 00:23:23.193486 containerd[1598]: time="2025-07-07T00:23:23.193452175Z" level=error msg="ContainerStatus for \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\": not found" Jul 7 00:23:23.193659 kubelet[2721]: E0707 00:23:23.193630 2721 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\": not found" containerID="6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137" Jul 7 00:23:23.193710 kubelet[2721]: I0707 00:23:23.193668 2721 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137"} err="failed to get container status \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b09a645164b9579922f5c251b0152a509f7bb09fe3b0e12bfe916d2cda4d137\": not found" Jul 7 00:23:23.950544 kubelet[2721]: I0707 00:23:23.950487 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf9b3fc7-319a-4ea9-9813-406bb0cb6545" 
path="/var/lib/kubelet/pods/cf9b3fc7-319a-4ea9-9813-406bb0cb6545/volumes" Jul 7 00:23:23.951306 kubelet[2721]: I0707 00:23:23.951277 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b" path="/var/lib/kubelet/pods/ef970f4b-8dd0-43a5-8fa5-69cbfe2f415b/volumes" Jul 7 00:23:24.034689 sshd[4330]: Connection closed by 10.0.0.1 port 41444 Jul 7 00:23:24.035153 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:24.048506 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:41444.service: Deactivated successfully. Jul 7 00:23:24.050553 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:23:24.051456 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:23:24.054460 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:41446.service - OpenSSH per-connection server daemon (10.0.0.1:41446). Jul 7 00:23:24.055196 systemd-logind[1579]: Removed session 23. Jul 7 00:23:24.104117 sshd[4479]: Accepted publickey for core from 10.0.0.1 port 41446 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:24.105408 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:24.109771 systemd-logind[1579]: New session 24 of user core. Jul 7 00:23:24.125705 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:23:24.683678 sshd[4481]: Connection closed by 10.0.0.1 port 41446 Jul 7 00:23:24.684048 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:24.695095 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:41446.service: Deactivated successfully. Jul 7 00:23:24.699447 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:23:24.701052 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:23:24.707252 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:41448.service - OpenSSH per-connection server daemon (10.0.0.1:41448). Jul 7 00:23:24.710682 systemd-logind[1579]: Removed session 24. Jul 7 00:23:24.723458 systemd[1]: Created slice kubepods-burstable-pod8b05c471_2476_464d_89bc_d647f7bc6f53.slice - libcontainer container kubepods-burstable-pod8b05c471_2476_464d_89bc_d647f7bc6f53.slice. Jul 7 00:23:24.756746 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 41448 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:24.757991 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:24.762146 systemd-logind[1579]: New session 25 of user core. Jul 7 00:23:24.770726 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 7 00:23:24.804743 kubelet[2721]: I0707 00:23:24.804710 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-etc-cni-netd\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804813 kubelet[2721]: I0707 00:23:24.804745 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-cilium-run\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804813 kubelet[2721]: I0707 00:23:24.804764 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-bpf-maps\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804813 kubelet[2721]: I0707 00:23:24.804779 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-hostproc\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804813 kubelet[2721]: I0707 00:23:24.804796 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-cni-path\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804910 kubelet[2721]: I0707 00:23:24.804859 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-xtables-lock\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804910 kubelet[2721]: I0707 00:23:24.804896 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b05c471-2476-464d-89bc-d647f7bc6f53-clustermesh-secrets\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804957 kubelet[2721]: I0707 00:23:24.804913 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b05c471-2476-464d-89bc-d647f7bc6f53-cilium-config-path\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.804957 kubelet[2721]: I0707 00:23:24.804929 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b05c471-2476-464d-89bc-d647f7bc6f53-cilium-ipsec-secrets\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805003 kubelet[2721]: I0707 00:23:24.804954 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkz8\" (UniqueName: 
\"kubernetes.io/projected/8b05c471-2476-464d-89bc-d647f7bc6f53-kube-api-access-qlkz8\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805003 kubelet[2721]: I0707 00:23:24.804988 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-lib-modules\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805047 kubelet[2721]: I0707 00:23:24.805005 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b05c471-2476-464d-89bc-d647f7bc6f53-hubble-tls\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805077 kubelet[2721]: I0707 00:23:24.805052 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-host-proc-sys-net\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805099 kubelet[2721]: I0707 00:23:24.805081 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-host-proc-sys-kernel\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.805122 kubelet[2721]: I0707 00:23:24.805110 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b05c471-2476-464d-89bc-d647f7bc6f53-cilium-cgroup\") pod \"cilium-8r7vl\" (UID: \"8b05c471-2476-464d-89bc-d647f7bc6f53\") " pod="kube-system/cilium-8r7vl" Jul 7 00:23:24.819417 sshd[4495]: Connection closed by 10.0.0.1 port 41448 Jul 7 00:23:24.819728 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:24.833390 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:41448.service: Deactivated successfully. Jul 7 00:23:24.835357 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:23:24.836244 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:23:24.839714 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:41460.service - OpenSSH per-connection server daemon (10.0.0.1:41460). Jul 7 00:23:24.840303 systemd-logind[1579]: Removed session 25. Jul 7 00:23:24.886492 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 41460 ssh2: RSA SHA256:vB2ZN40YeU5BcTegIv+9PTVQlt78XDAEBJuAoVsHXyE Jul 7 00:23:24.887852 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:23:24.892459 systemd-logind[1579]: New session 26 of user core. Jul 7 00:23:24.901730 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 00:23:25.032203 kubelet[2721]: E0707 00:23:25.032079 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:25.032853 containerd[1598]: time="2025-07-07T00:23:25.032611872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r7vl,Uid:8b05c471-2476-464d-89bc-d647f7bc6f53,Namespace:kube-system,Attempt:0,}" Jul 7 00:23:25.048144 containerd[1598]: time="2025-07-07T00:23:25.048099680Z" level=info msg="connecting to shim cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:23:25.078711 systemd[1]: Started cri-containerd-cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b.scope - libcontainer container cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b. Jul 7 00:23:25.103525 containerd[1598]: time="2025-07-07T00:23:25.103484447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r7vl,Uid:8b05c471-2476-464d-89bc-d647f7bc6f53,Namespace:kube-system,Attempt:0,} returns sandbox id \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\"" Jul 7 00:23:25.104153 kubelet[2721]: E0707 00:23:25.104118 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:25.109174 containerd[1598]: time="2025-07-07T00:23:25.109126397Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:23:25.117366 containerd[1598]: time="2025-07-07T00:23:25.117153662Z" level=info msg="Container 3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:25.123649 containerd[1598]: time="2025-07-07T00:23:25.123555341Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\"" Jul 7 00:23:25.124084 containerd[1598]: time="2025-07-07T00:23:25.124055282Z" level=info msg="StartContainer for \"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\"" Jul 7 00:23:25.124881 containerd[1598]: time="2025-07-07T00:23:25.124858415Z" level=info msg="connecting to shim 3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" protocol=ttrpc version=3 Jul 7 00:23:25.145734 systemd[1]: Started cri-containerd-3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd.scope - libcontainer container 3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd. Jul 7 00:23:25.173759 containerd[1598]: time="2025-07-07T00:23:25.173722683Z" level=info msg="StartContainer for \"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\" returns successfully" Jul 7 00:23:25.181826 systemd[1]: cri-containerd-3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd.scope: Deactivated successfully. 
Jul 7 00:23:25.182849 containerd[1598]: time="2025-07-07T00:23:25.182814452Z" level=info msg="received exit event container_id:\"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\" id:\"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\" pid:4572 exited_at:{seconds:1751847805 nanos:182494287}" Jul 7 00:23:25.182960 containerd[1598]: time="2025-07-07T00:23:25.182819010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\" id:\"3b9845703c801241c32a02d23f2ca19d72027712cfeee60945ec7b35ef05b2bd\" pid:4572 exited_at:{seconds:1751847805 nanos:182494287}" Jul 7 00:23:25.910658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount554814533.mount: Deactivated successfully. Jul 7 00:23:25.993257 kubelet[2721]: E0707 00:23:25.993226 2721 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:23:26.153028 kubelet[2721]: E0707 00:23:26.152996 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:26.195046 containerd[1598]: time="2025-07-07T00:23:26.194946249Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:23:26.205944 containerd[1598]: time="2025-07-07T00:23:26.205898089Z" level=info msg="Container f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:26.222175 containerd[1598]: time="2025-07-07T00:23:26.222125247Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\"" Jul 7 00:23:26.222865 containerd[1598]: time="2025-07-07T00:23:26.222729016Z" level=info msg="StartContainer for \"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\"" Jul 7 00:23:26.223494 containerd[1598]: time="2025-07-07T00:23:26.223461973Z" level=info msg="connecting to shim f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" protocol=ttrpc version=3 Jul 7 00:23:26.242812 systemd[1]: Started cri-containerd-f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310.scope - libcontainer container f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310. Jul 7 00:23:26.272840 containerd[1598]: time="2025-07-07T00:23:26.272732682Z" level=info msg="StartContainer for \"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\" returns successfully" Jul 7 00:23:26.277516 systemd[1]: cri-containerd-f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310.scope: Deactivated successfully. 
Jul 7 00:23:26.278294 containerd[1598]: time="2025-07-07T00:23:26.278264644Z" level=info msg="received exit event container_id:\"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\" id:\"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\" pid:4618 exited_at:{seconds:1751847806 nanos:277946664}" Jul 7 00:23:26.278668 containerd[1598]: time="2025-07-07T00:23:26.278345509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\" id:\"f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310\" pid:4618 exited_at:{seconds:1751847806 nanos:277946664}" Jul 7 00:23:26.297735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2da57a9eeee7298b5f466a3404ce766dd576902ef9d8c6f20a1d2e1cf3bf310-rootfs.mount: Deactivated successfully. Jul 7 00:23:27.156832 kubelet[2721]: E0707 00:23:27.156801 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:27.161505 containerd[1598]: time="2025-07-07T00:23:27.161454613Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:23:27.171446 containerd[1598]: time="2025-07-07T00:23:27.171399388Z" level=info msg="Container a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:27.178987 containerd[1598]: time="2025-07-07T00:23:27.178937924Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\"" Jul 7 00:23:27.179419 containerd[1598]: time="2025-07-07T00:23:27.179366976Z" level=info msg="StartContainer for \"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\"" Jul 7 00:23:27.180629 containerd[1598]: time="2025-07-07T00:23:27.180602708Z" level=info msg="connecting to shim a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" protocol=ttrpc version=3 Jul 7 00:23:27.200711 systemd[1]: Started cri-containerd-a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e.scope - libcontainer container a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e. Jul 7 00:23:27.245817 systemd[1]: cri-containerd-a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e.scope: Deactivated successfully. 
Jul 7 00:23:27.246271 containerd[1598]: time="2025-07-07T00:23:27.246240636Z" level=info msg="StartContainer for \"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\" returns successfully" Jul 7 00:23:27.247374 containerd[1598]: time="2025-07-07T00:23:27.247167434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\" id:\"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\" pid:4663 exited_at:{seconds:1751847807 nanos:246880023}" Jul 7 00:23:27.247374 containerd[1598]: time="2025-07-07T00:23:27.247259761Z" level=info msg="received exit event container_id:\"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\" id:\"a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e\" pid:4663 exited_at:{seconds:1751847807 nanos:246880023}" Jul 7 00:23:27.267177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a54972a4ab3c2e129bd827067bf5f53169373d13261f91d04bab382ace7c884e-rootfs.mount: Deactivated successfully. Jul 7 00:23:27.778511 kubelet[2721]: I0707 00:23:27.778448 2721 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:23:27Z","lastTransitionTime":"2025-07-07T00:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 00:23:28.160964 kubelet[2721]: E0707 00:23:28.160934 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:28.165735 containerd[1598]: time="2025-07-07T00:23:28.165687439Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:23:28.175218 containerd[1598]: time="2025-07-07T00:23:28.175170766Z" level=info msg="Container e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:28.181873 containerd[1598]: time="2025-07-07T00:23:28.181831780Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\"" Jul 7 00:23:28.182243 containerd[1598]: time="2025-07-07T00:23:28.182218122Z" level=info msg="StartContainer for \"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\"" Jul 7 00:23:28.183023 containerd[1598]: time="2025-07-07T00:23:28.182998367Z" level=info msg="connecting to shim e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" protocol=ttrpc version=3 Jul 7 00:23:28.214738 systemd[1]: Started cri-containerd-e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21.scope - libcontainer container e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21. Jul 7 00:23:28.240777 systemd[1]: cri-containerd-e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21.scope: Deactivated successfully. 
Jul 7 00:23:28.241167 containerd[1598]: time="2025-07-07T00:23:28.241129142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\" id:\"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\" pid:4702 exited_at:{seconds:1751847808 nanos:240890284}" Jul 7 00:23:28.242577 containerd[1598]: time="2025-07-07T00:23:28.242542533Z" level=info msg="received exit event container_id:\"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\" id:\"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\" pid:4702 exited_at:{seconds:1751847808 nanos:240890284}" Jul 7 00:23:28.252354 containerd[1598]: time="2025-07-07T00:23:28.252304634Z" level=info msg="StartContainer for \"e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21\" returns successfully" Jul 7 00:23:28.266822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5fe5f40156fd1c15a06505b18c1d856cf42397b079972b34067201400abbe21-rootfs.mount: Deactivated successfully. Jul 7 00:23:29.165927 kubelet[2721]: E0707 00:23:29.165895 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:29.171557 containerd[1598]: time="2025-07-07T00:23:29.171508016Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:23:29.181496 containerd[1598]: time="2025-07-07T00:23:29.181441185Z" level=info msg="Container 628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:23:29.193037 containerd[1598]: time="2025-07-07T00:23:29.192989289Z" level=info msg="CreateContainer within sandbox \"cffa93c80c85eda15d2d59680fddd7ff3109b47a0976da8635c1234b919b3d0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\"" Jul 7 00:23:29.193514 containerd[1598]: time="2025-07-07T00:23:29.193480841Z" level=info msg="StartContainer for \"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\"" Jul 7 00:23:29.194297 containerd[1598]: time="2025-07-07T00:23:29.194271206Z" level=info msg="connecting to shim 628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367" address="unix:///run/containerd/s/7dfe5ca72f57e999bf851111b7a0504399a30ea22ee6b8e3235b8b7d1b7a692f" protocol=ttrpc version=3 Jul 7 00:23:29.215719 systemd[1]: Started cri-containerd-628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367.scope - libcontainer container 628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367. 
Jul 7 00:23:29.256887 containerd[1598]: time="2025-07-07T00:23:29.256794464Z" level=info msg="StartContainer for \"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" returns successfully" Jul 7 00:23:29.318301 containerd[1598]: time="2025-07-07T00:23:29.318252129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"a02f87053840187e75695256c7dac2452f5e4c238540502ee7f103642f8990f8\" pid:4771 exited_at:{seconds:1751847809 nanos:317946094}" Jul 7 00:23:29.654629 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 00:23:30.171861 kubelet[2721]: E0707 00:23:30.171544 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:30.184849 kubelet[2721]: I0707 00:23:30.184788 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8r7vl" podStartSLOduration=6.184772215 podStartE2EDuration="6.184772215s" podCreationTimestamp="2025-07-07 00:23:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:23:30.184320449 +0000 UTC m=+84.326513268" watchObservedRunningTime="2025-07-07 00:23:30.184772215 +0000 UTC m=+84.326965034" Jul 7 00:23:31.174119 kubelet[2721]: E0707 00:23:31.173758 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:31.189483 containerd[1598]: time="2025-07-07T00:23:31.189440189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"5947b5d9f52ca06bac1ac9f1650501bb7242b4731c7e87fad0a8e0b68065d2af\" pid:4912 exit_status:1 exited_at:{seconds:1751847811 nanos:189147639}" Jul 7 00:23:32.175493 kubelet[2721]: E0707 00:23:32.175462 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:32.689680 systemd-networkd[1491]: lxc_health: Link UP Jul 7 00:23:32.689984 systemd-networkd[1491]: lxc_health: Gained carrier Jul 7 00:23:33.177552 kubelet[2721]: E0707 00:23:33.177301 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:33.299262 containerd[1598]: time="2025-07-07T00:23:33.299210344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"9786f732256755dca3315cd39ac4adac92846225c11770ac75152eda78fd9b91\" pid:5303 exited_at:{seconds:1751847813 nanos:298916452}" Jul 7 00:23:34.178671 kubelet[2721]: E0707 00:23:34.178635 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:34.429673 systemd-networkd[1491]: lxc_health: Gained IPv6LL Jul 7 00:23:35.181464 kubelet[2721]: E0707 00:23:35.181017 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:35.388518 
containerd[1598]: time="2025-07-07T00:23:35.388465518Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"9aae0e76809486d7b34fee0aa9a8ac4c9ed3116f6d7603652d8db1e3cda9e419\" pid:5338 exited_at:{seconds:1751847815 nanos:387958610}" Jul 7 00:23:37.488937 containerd[1598]: time="2025-07-07T00:23:37.488893416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"e40b50860590826355baf5228381a5108adbbec4195966d2678c7a0f3f9cddff\" pid:5368 exited_at:{seconds:1751847817 nanos:488524252}" Jul 7 00:23:37.950303 kubelet[2721]: E0707 00:23:37.950262 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:38.948447 kubelet[2721]: E0707 00:23:38.948394 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:23:39.573747 containerd[1598]: time="2025-07-07T00:23:39.573691349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"628092e6bb0188407cf57595b827174a34e1321cd6955356b481f1d77cead367\" id:\"3de73c58df0898293004dea23957648808620f2e8ca7958155ccb63057524d33\" pid:5391 exited_at:{seconds:1751847819 nanos:573182357}" Jul 7 00:23:39.592521 sshd[4504]: Connection closed by 10.0.0.1 port 41460 Jul 7 00:23:39.593034 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jul 7 00:23:39.597948 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:41460.service: Deactivated successfully. Jul 7 00:23:39.600113 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:23:39.601142 systemd-logind[1579]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:23:39.602532 systemd-logind[1579]: Removed session 26. Jul 7 00:23:39.948186 kubelet[2721]: E0707 00:23:39.948145 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"