Jul 10 00:12:36.946542 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:12:36.946565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:12:36.946577 kernel: BIOS-provided physical RAM map:
Jul 10 00:12:36.946583 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 10 00:12:36.946590 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 10 00:12:36.946596 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 10 00:12:36.946604 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 10 00:12:36.946611 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jul 10 00:12:36.946620 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 10 00:12:36.946627 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 10 00:12:36.946634 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 10 00:12:36.946642 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 10 00:12:36.946649 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 10 00:12:36.946656 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 10 00:12:36.946664 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 10 00:12:36.946671 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 10 00:12:36.946683 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 10 00:12:36.946690 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:12:36.946697 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:12:36.946704 kernel: NX (Execute Disable) protection: active
Jul 10 00:12:36.946711 kernel: APIC: Static calls initialized
Jul 10 00:12:36.946718 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Jul 10 00:12:36.946726 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Jul 10 00:12:36.946733 kernel: extended physical RAM map:
Jul 10 00:12:36.946740 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 10 00:12:36.946747 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 10 00:12:36.946754 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 10 00:12:36.946763 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 10 00:12:36.946770 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Jul 10 00:12:36.946777 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Jul 10 00:12:36.946784 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Jul 10 00:12:36.946791 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Jul 10 00:12:36.946798 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Jul 10 00:12:36.946805 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 10 00:12:36.946812 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 10 00:12:36.946819 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 10 00:12:36.946826 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 10 00:12:36.946833 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 10 00:12:36.946842 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 10 00:12:36.946850 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 10 00:12:36.946860 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 10 00:12:36.946867 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 10 00:12:36.946875 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 10 00:12:36.946882 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 10 00:12:36.946891 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:12:36.946899 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Jul 10 00:12:36.946906 kernel: random: crng init done
Jul 10 00:12:36.946914 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 10 00:12:36.946921 kernel: secureboot: Secure boot enabled
Jul 10 00:12:36.946928 kernel: SMBIOS 2.8 present.
Jul 10 00:12:36.946935 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 10 00:12:36.946943 kernel: DMI: Memory slots populated: 1/1
Jul 10 00:12:36.946950 kernel: Hypervisor detected: KVM
Jul 10 00:12:36.946957 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 10 00:12:36.946965 kernel: kvm-clock: using sched offset of 6467509744 cycles
Jul 10 00:12:36.946974 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 10 00:12:36.946982 kernel: tsc: Detected 2794.748 MHz processor
Jul 10 00:12:36.946990 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:12:36.946998 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:12:36.947005 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Jul 10 00:12:36.947019 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:12:36.947041 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:12:36.947048 kernel: Using GB pages for direct mapping
Jul 10 00:12:36.947058 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:12:36.947069 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Jul 10 00:12:36.947077 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:12:36.947084 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947092 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947099 kernel: ACPI: FACS 0x000000009BBDD000 000040
Jul 10 00:12:36.947107 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947114 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947122 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947129 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:12:36.947139 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 10 00:12:36.947147 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Jul 10 00:12:36.947154 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Jul 10 00:12:36.947161 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Jul 10 00:12:36.947169 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Jul 10 00:12:36.947176 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Jul 10 00:12:36.947184 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Jul 10 00:12:36.947191 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Jul 10 00:12:36.947212 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Jul 10 00:12:36.947234 kernel: No NUMA configuration found
Jul 10 00:12:36.947241 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Jul 10 00:12:36.947249 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Jul 10 00:12:36.947256 kernel: Zone ranges:
Jul 10 00:12:36.947264 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:12:36.947271 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Jul 10 00:12:36.947279 kernel: Normal empty
Jul 10 00:12:36.947286 kernel: Device empty
Jul 10 00:12:36.947293 kernel: Movable zone start for each node
Jul 10 00:12:36.947303 kernel: Early memory node ranges
Jul 10 00:12:36.947310 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Jul 10 00:12:36.947318 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Jul 10 00:12:36.947325 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Jul 10 00:12:36.947333 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Jul 10 00:12:36.947340 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Jul 10 00:12:36.947347 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Jul 10 00:12:36.947355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:12:36.947362 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Jul 10 00:12:36.947370 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 10 00:12:36.947380 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 10 00:12:36.947387 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 10 00:12:36.947395 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Jul 10 00:12:36.947402 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 10 00:12:36.947409 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 10 00:12:36.947417 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:12:36.947424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 10 00:12:36.947432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 10 00:12:36.947442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:12:36.947452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 10 00:12:36.947460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 10 00:12:36.947467 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:12:36.947474 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 10 00:12:36.947482 kernel: TSC deadline timer available
Jul 10 00:12:36.947489 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:12:36.947497 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:12:36.947504 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:12:36.947520 kernel: CPU topo: Max. threads per core: 1
Jul 10 00:12:36.947528 kernel: CPU topo: Num. cores per package: 4
Jul 10 00:12:36.947535 kernel: CPU topo: Num. threads per package: 4
Jul 10 00:12:36.947543 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 10 00:12:36.947555 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 10 00:12:36.947563 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 10 00:12:36.947570 kernel: kvm-guest: setup PV sched yield
Jul 10 00:12:36.947578 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 10 00:12:36.947588 kernel: Booting paravirtualized kernel on KVM
Jul 10 00:12:36.947596 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:12:36.947604 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 10 00:12:36.947612 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 10 00:12:36.947620 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 10 00:12:36.947628 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 10 00:12:36.947635 kernel: kvm-guest: PV spinlocks enabled
Jul 10 00:12:36.947643 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:12:36.947652 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:12:36.947663 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:12:36.947671 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:12:36.947679 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:12:36.947687 kernel: Fallback order for Node 0: 0
Jul 10 00:12:36.947694 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Jul 10 00:12:36.947702 kernel: Policy zone: DMA32
Jul 10 00:12:36.947710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:12:36.947717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:12:36.947727 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:12:36.947735 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:12:36.947743 kernel: Dynamic Preempt: voluntary
Jul 10 00:12:36.947750 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:12:36.947759 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:12:36.947767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:12:36.947775 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:12:36.947783 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:12:36.947790 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:12:36.947798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:12:36.947808 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:12:36.947816 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:12:36.947824 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:12:36.947834 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:12:36.947842 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 10 00:12:36.947850 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:12:36.947858 kernel: Console: colour dummy device 80x25
Jul 10 00:12:36.947865 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:12:36.947875 kernel: ACPI: Core revision 20240827
Jul 10 00:12:36.947883 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 10 00:12:36.947891 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:12:36.947899 kernel: x2apic enabled
Jul 10 00:12:36.947907 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:12:36.947914 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 10 00:12:36.947923 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 10 00:12:36.947930 kernel: kvm-guest: setup PV IPIs
Jul 10 00:12:36.947938 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 10 00:12:36.947946 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 10 00:12:36.947956 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 10 00:12:36.947964 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:12:36.947972 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 10 00:12:36.947979 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 10 00:12:36.947989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:12:36.947997 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:12:36.948005 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:12:36.948013 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 10 00:12:36.948031 kernel: RETBleed: Mitigation: untrained return thunk
Jul 10 00:12:36.948039 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 10 00:12:36.948047 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 10 00:12:36.948055 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 10 00:12:36.948064 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 10 00:12:36.948072 kernel: x86/bugs: return thunk changed
Jul 10 00:12:36.948079 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 10 00:12:36.948087 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:12:36.948095 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:12:36.948105 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:12:36.948113 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:12:36.948121 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 10 00:12:36.948128 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:12:36.948136 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:12:36.948144 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:12:36.948151 kernel: landlock: Up and running.
Jul 10 00:12:36.948159 kernel: SELinux: Initializing.
Jul 10 00:12:36.948167 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:12:36.948177 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:12:36.948185 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 10 00:12:36.948193 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 10 00:12:36.948285 kernel: ... version:                0
Jul 10 00:12:36.948292 kernel: ... bit width:              48
Jul 10 00:12:36.948303 kernel: ... generic registers:      6
Jul 10 00:12:36.948311 kernel: ... value mask:             0000ffffffffffff
Jul 10 00:12:36.948318 kernel: ... max period:             00007fffffffffff
Jul 10 00:12:36.948326 kernel: ... fixed-purpose events:   0
Jul 10 00:12:36.948337 kernel: ... event mask:             000000000000003f
Jul 10 00:12:36.948345 kernel: signal: max sigframe size: 1776
Jul 10 00:12:36.948353 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:12:36.948360 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:12:36.948368 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:12:36.948376 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:12:36.948384 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:12:36.948391 kernel: .... node #0, CPUs: #1 #2 #3
Jul 10 00:12:36.948399 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:12:36.948409 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 10 00:12:36.948417 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137064K reserved, 0K cma-reserved)
Jul 10 00:12:36.948425 kernel: devtmpfs: initialized
Jul 10 00:12:36.948433 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:12:36.948440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Jul 10 00:12:36.948448 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Jul 10 00:12:36.948456 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:12:36.948464 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:12:36.948472 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:12:36.948482 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:12:36.948490 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:12:36.948498 kernel: audit: type=2000 audit(1752106354.144:1): state=initialized audit_enabled=0 res=1
Jul 10 00:12:36.948507 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:12:36.948516 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:12:36.948525 kernel: cpuidle: using governor menu
Jul 10 00:12:36.948534 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:12:36.948542 kernel: dca service started, version 1.12.1
Jul 10 00:12:36.948549 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 10 00:12:36.948559 kernel: PCI: Using configuration type 1 for base access
Jul 10 00:12:36.948567 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:12:36.948575 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:12:36.948583 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:12:36.948591 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:12:36.948598 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:12:36.948606 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:12:36.948614 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:12:36.948623 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:12:36.948631 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:12:36.948639 kernel: ACPI: Interpreter enabled
Jul 10 00:12:36.948646 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:12:36.948654 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:12:36.948662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:12:36.948670 kernel: PCI: Using E820 reservations for host bridge windows
Jul 10 00:12:36.948677 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 10 00:12:36.948685 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:12:36.948910 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:12:36.949049 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 10 00:12:36.949171 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 10 00:12:36.949182 kernel: PCI host bridge to bus 0000:00
Jul 10 00:12:36.949332 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 10 00:12:36.949445 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 10 00:12:36.949567 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 10 00:12:36.949681 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 10 00:12:36.949791 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 10 00:12:36.949899 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 10 00:12:36.950007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:12:36.950188 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 10 00:12:36.950344 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 10 00:12:36.950470 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 10 00:12:36.950590 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 10 00:12:36.950708 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 10 00:12:36.950826 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 10 00:12:36.950967 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 00:12:36.951100 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 10 00:12:36.951249 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 10 00:12:36.951379 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 10 00:12:36.951533 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 10 00:12:36.951656 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 10 00:12:36.951776 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 10 00:12:36.951895 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 10 00:12:36.952042 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 10 00:12:36.952169 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 10 00:12:36.952308 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 10 00:12:36.952428 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 10 00:12:36.952547 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 10 00:12:36.952822 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 10 00:12:36.952944 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 10 00:12:36.953089 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 10 00:12:36.953258 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 10 00:12:36.953388 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 10 00:12:36.953527 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 10 00:12:36.953654 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 10 00:12:36.953665 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 10 00:12:36.953673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 10 00:12:36.953681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 10 00:12:36.953689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 10 00:12:36.953702 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 10 00:12:36.953710 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 10 00:12:36.953717 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 10 00:12:36.953725 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 10 00:12:36.953733 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 10 00:12:36.953741 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 10 00:12:36.953749 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 10 00:12:36.953757 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 10 00:12:36.953764 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 10 00:12:36.953774 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 10 00:12:36.953782 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 10 00:12:36.953790 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 10 00:12:36.953798 kernel: iommu: Default domain type: Translated
Jul 10 00:12:36.953806 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:12:36.953813 kernel: efivars: Registered efivars operations
Jul 10 00:12:36.953821 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:12:36.953829 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 10 00:12:36.953837 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jul 10 00:12:36.953846 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Jul 10 00:12:36.953854 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Jul 10 00:12:36.953862 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Jul 10 00:12:36.953869 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Jul 10 00:12:36.953989 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 10 00:12:36.954119 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 10 00:12:36.954255 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 10 00:12:36.954267 kernel: vgaarb: loaded
Jul 10 00:12:36.954279 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 10 00:12:36.954287 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 10 00:12:36.954295 kernel: clocksource: Switched to clocksource kvm-clock
Jul 10 00:12:36.954303 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:12:36.954310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:12:36.954318 kernel: pnp: PnP ACPI init
Jul 10 00:12:36.954461 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 10 00:12:36.954473 kernel: pnp: PnP ACPI: found 6 devices
Jul 10 00:12:36.954481 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:12:36.954492 kernel: NET: Registered PF_INET protocol family
Jul 10 00:12:36.954500 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:12:36.954508 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:12:36.954516 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:12:36.954524 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:12:36.954531 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:12:36.954539 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:12:36.954547 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:12:36.954557 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:12:36.954565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:12:36.954573 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:12:36.954694 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 10 00:12:36.954814 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 10 00:12:36.954927 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 10 00:12:36.955055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 10 00:12:36.955168 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 10 00:12:36.955302 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 10 00:12:36.955413 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 10 00:12:36.955526 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 10 00:12:36.955539 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:12:36.955547 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 10 00:12:36.955561 kernel: Initialise system trusted keyrings
Jul 10 00:12:36.955569 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:12:36.955577 kernel: Key type asymmetric registered
Jul 10 00:12:36.955585 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:12:36.955597 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:12:36.955620 kernel: io scheduler mq-deadline registered
Jul 10 00:12:36.955630 kernel: io scheduler kyber registered
Jul 10 00:12:36.955638 kernel: io scheduler bfq registered
Jul 10 00:12:36.955646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:12:36.955655 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 10 00:12:36.955663 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 10 00:12:36.955671 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 10 00:12:36.955679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:12:36.955689 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:12:36.955697 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 10 00:12:36.955705 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 10 00:12:36.955713 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 10 00:12:36.955855 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 10 00:12:36.955868 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 10 00:12:36.955994 kernel: rtc_cmos 00:04: registered as rtc0
Jul 10 00:12:36.956127 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:12:36 UTC (1752106356)
Jul 10 00:12:36.956264 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 10 00:12:36.956275 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 10 00:12:36.956283 kernel: efifb: probing for efifb
Jul 10 00:12:36.956292 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 10 00:12:36.956300 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 10 00:12:36.956308 kernel: efifb: scrolling: redraw
Jul 10 00:12:36.956316 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:12:36.956324 kernel: Console: switching to colour frame buffer device 160x50
Jul 10 00:12:36.956332 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:12:36.956343 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:12:36.956352 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:12:36.956362 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:12:36.956370 kernel: Segment Routing with IPv6
Jul 10 00:12:36.956378 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:12:36.956386 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:12:36.956396 kernel: Key type dns_resolver registered
Jul 10 00:12:36.956404 kernel: IPI shorthand broadcast: enabled
Jul 10 00:12:36.956412 kernel: sched_clock: Marking stable (3505002568, 148178530)->(3688686013, -35504915)
Jul 10 00:12:36.956420 kernel: registered taskstats version 1
Jul 10 00:12:36.956428 kernel: Loading compiled-in X.509 certificates
Jul 10 00:12:36.956436 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:12:36.956444 kernel: Demotion targets for Node 0: null
Jul 10 00:12:36.956452 kernel: Key type .fscrypt registered
Jul 10 00:12:36.956460 kernel: Key type fscrypt-provisioning registered
Jul 10 00:12:36.956470 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:12:36.956480 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:12:36.956488 kernel: ima: No architecture policies found
Jul 10 00:12:36.956496 kernel: clk: Disabling unused clocks
Jul 10 00:12:36.956504 kernel: Warning: unable to open an initial console.
Jul 10 00:12:36.956519 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:12:36.956527 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:12:36.956535 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:12:36.956545 kernel: Run /init as init process Jul 10 00:12:36.956553 kernel: with arguments: Jul 10 00:12:36.956561 kernel: /init Jul 10 00:12:36.956569 kernel: with environment: Jul 10 00:12:36.956577 kernel: HOME=/ Jul 10 00:12:36.956585 kernel: TERM=linux Jul 10 00:12:36.956593 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:12:36.956602 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:12:36.956613 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:12:36.956624 systemd[1]: Detected virtualization kvm. Jul 10 00:12:36.956633 systemd[1]: Detected architecture x86-64. Jul 10 00:12:36.956641 systemd[1]: Running in initrd. Jul 10 00:12:36.956649 systemd[1]: No hostname configured, using default hostname. Jul 10 00:12:36.956658 systemd[1]: Hostname set to . Jul 10 00:12:36.956667 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:12:36.956675 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:12:36.956686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:12:36.956695 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:12:36.956704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 10 00:12:36.956713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:12:36.956721 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:12:36.956731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:12:36.956740 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:12:36.956752 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:12:36.956760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:12:36.956769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:12:36.956777 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:12:36.956786 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:12:36.956794 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:12:36.956803 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:12:36.956811 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:12:36.956822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:12:36.956831 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:12:36.956839 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:12:36.956848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:12:36.956856 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:12:36.956865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:12:36.956873 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 10 00:12:36.956882 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:12:36.956890 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:12:36.956901 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:12:36.956911 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:12:36.956919 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:12:36.956928 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:12:36.956936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:12:36.956945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:36.956954 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:12:36.956965 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:12:36.956973 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:12:36.957012 systemd-journald[220]: Collecting audit messages is disabled. Jul 10 00:12:36.957043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:12:36.957052 systemd-journald[220]: Journal started Jul 10 00:12:36.957073 systemd-journald[220]: Runtime Journal (/run/log/journal/9a136db335864723a4e551455373038d) is 6M, max 48.2M, 42.2M free. Jul 10 00:12:36.942803 systemd-modules-load[221]: Inserted module 'overlay' Jul 10 00:12:36.971118 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:12:36.972546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:36.975562 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 10 00:12:36.979902 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:12:36.982747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:12:36.985461 kernel: Bridge firewalling registered Jul 10 00:12:36.985264 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 10 00:12:36.985395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:12:37.000262 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:12:37.000639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:12:37.004321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:12:37.017118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:12:37.017387 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:12:37.020620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:12:37.023821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:12:37.031018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:12:37.034699 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:12:37.037462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 10 00:12:37.065622 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:12:37.087533 systemd-resolved[259]: Positive Trust Anchors: Jul 10 00:12:37.087549 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:12:37.087582 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:12:37.090359 systemd-resolved[259]: Defaulting to hostname 'linux'. Jul 10 00:12:37.091591 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:12:37.097164 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:12:37.198250 kernel: SCSI subsystem initialized Jul 10 00:12:37.210232 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:12:37.224228 kernel: iscsi: registered transport (tcp) Jul 10 00:12:37.246230 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:12:37.246272 kernel: QLogic iSCSI HBA Driver Jul 10 00:12:37.269574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 10 00:12:37.302993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:12:37.304114 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:12:37.368977 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:12:37.372615 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:12:37.433244 kernel: raid6: avx2x4 gen() 29279 MB/s Jul 10 00:12:37.450230 kernel: raid6: avx2x2 gen() 31070 MB/s Jul 10 00:12:37.467414 kernel: raid6: avx2x1 gen() 25605 MB/s Jul 10 00:12:37.467443 kernel: raid6: using algorithm avx2x2 gen() 31070 MB/s Jul 10 00:12:37.485290 kernel: raid6: .... xor() 19721 MB/s, rmw enabled Jul 10 00:12:37.485329 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:12:37.506226 kernel: xor: automatically using best checksumming function avx Jul 10 00:12:37.698246 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:12:37.707723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:12:37.710539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:12:37.745440 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 10 00:12:37.750981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:12:37.779304 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:12:37.814415 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jul 10 00:12:37.844998 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:12:37.846490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:12:37.938211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:12:37.941306 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 10 00:12:37.974228 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 00:12:37.976972 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:12:37.985107 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:12:37.985131 kernel: GPT:9289727 != 19775487 Jul 10 00:12:37.985142 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:12:37.985152 kernel: GPT:9289727 != 19775487 Jul 10 00:12:37.985162 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:12:37.985172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:12:37.999221 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:12:38.014234 kernel: AES CTR mode by8 optimization enabled Jul 10 00:12:38.028251 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:12:38.028794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:12:38.029082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:38.033337 kernel: libata version 3.00 loaded. Jul 10 00:12:38.033539 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:38.036667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:38.038401 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 10 00:12:38.050223 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:12:38.050429 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:12:38.055265 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 00:12:38.055442 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 00:12:38.055585 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:12:38.061224 kernel: scsi host0: ahci Jul 10 00:12:38.061433 kernel: scsi host1: ahci Jul 10 00:12:38.065523 kernel: scsi host2: ahci Jul 10 00:12:38.066220 kernel: scsi host3: ahci Jul 10 00:12:38.067216 kernel: scsi host4: ahci Jul 10 00:12:38.067509 kernel: scsi host5: ahci Jul 10 00:12:38.068714 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 00:12:38.068738 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 00:12:38.071249 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 00:12:38.071280 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 00:12:38.070918 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 00:12:38.081566 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 00:12:38.081587 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 00:12:38.089188 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:12:38.091926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:38.120872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:12:38.122126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 10 00:12:38.132789 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:12:38.135644 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:12:38.196928 disk-uuid[636]: Primary Header is updated. Jul 10 00:12:38.196928 disk-uuid[636]: Secondary Entries is updated. Jul 10 00:12:38.196928 disk-uuid[636]: Secondary Header is updated. Jul 10 00:12:38.204240 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:12:38.212241 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:12:38.395237 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:12:38.395317 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:12:38.396224 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:12:38.397247 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:12:38.398222 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:12:38.398240 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:12:38.399228 kernel: ata3.00: applying bridge limits Jul 10 00:12:38.399244 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:12:38.400250 kernel: ata3.00: configured for UDMA/100 Jul 10 00:12:38.401233 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:12:38.464244 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:12:38.464582 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:12:38.490241 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:12:38.920565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:12:38.921170 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:12:38.921378 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:12:38.921673 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 10 00:12:38.922863 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:12:38.946729 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:12:39.209252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:12:39.209833 disk-uuid[637]: The operation has completed successfully. Jul 10 00:12:39.242333 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:12:39.242456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:12:39.276320 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:12:39.302325 sh[666]: Success Jul 10 00:12:39.323234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:12:39.323269 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:12:39.324470 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:12:39.334238 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:12:39.368079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:12:39.370387 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:12:39.383105 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 10 00:12:39.391459 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:12:39.391488 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (678) Jul 10 00:12:39.392809 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:12:39.392827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:12:39.393658 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:12:39.398938 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:12:39.399573 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:12:39.401731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:12:39.402676 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:12:39.404521 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:12:39.440261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Jul 10 00:12:39.443263 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:12:39.443330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:12:39.443342 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:12:39.452228 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:12:39.452555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:12:39.456090 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:12:39.580665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 10 00:12:39.584105 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:12:39.614832 ignition[755]: Ignition 2.21.0 Jul 10 00:12:39.614848 ignition[755]: Stage: fetch-offline Jul 10 00:12:39.614898 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:12:39.614912 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:12:39.615047 ignition[755]: parsed url from cmdline: "" Jul 10 00:12:39.615052 ignition[755]: no config URL provided Jul 10 00:12:39.615060 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:12:39.615073 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:12:39.615105 ignition[755]: op(1): [started] loading QEMU firmware config module Jul 10 00:12:39.615110 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:12:39.625048 ignition[755]: op(1): [finished] loading QEMU firmware config module Jul 10 00:12:39.639966 systemd-networkd[853]: lo: Link UP Jul 10 00:12:39.639976 systemd-networkd[853]: lo: Gained carrier Jul 10 00:12:39.643056 systemd-networkd[853]: Enumeration completed Jul 10 00:12:39.644105 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:12:39.645524 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:12:39.645533 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:12:39.646474 systemd[1]: Reached target network.target - Network. Jul 10 00:12:39.647053 systemd-networkd[853]: eth0: Link UP Jul 10 00:12:39.647058 systemd-networkd[853]: eth0: Gained carrier Jul 10 00:12:39.647075 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 00:12:39.676172 ignition[755]: parsing config with SHA512: ece9d31af597aa7a63d11ec35cdc2f229c596d2f44fd1bee147e93e7a9023bfa5e93785be3e546f17eaf01c13b35d7c13536f2bec0ac0e99c3c8ed95264d52a2 Jul 10 00:12:39.677269 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:12:39.680033 unknown[755]: fetched base config from "system" Jul 10 00:12:39.680047 unknown[755]: fetched user config from "qemu" Jul 10 00:12:39.680362 ignition[755]: fetch-offline: fetch-offline passed Jul 10 00:12:39.680421 ignition[755]: Ignition finished successfully Jul 10 00:12:39.683944 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:12:39.685698 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:12:39.686925 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:12:39.739919 ignition[861]: Ignition 2.21.0 Jul 10 00:12:39.739940 ignition[861]: Stage: kargs Jul 10 00:12:39.740455 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:12:39.740469 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:12:39.744580 ignition[861]: kargs: kargs passed Jul 10 00:12:39.745320 ignition[861]: Ignition finished successfully Jul 10 00:12:39.750682 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:12:39.752864 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 10 00:12:39.790654 ignition[869]: Ignition 2.21.0 Jul 10 00:12:39.790668 ignition[869]: Stage: disks Jul 10 00:12:39.790830 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:12:39.790843 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:12:39.793308 ignition[869]: disks: disks passed Jul 10 00:12:39.793372 ignition[869]: Ignition finished successfully Jul 10 00:12:39.796672 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:12:39.799029 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:12:39.800284 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:12:39.801330 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:12:39.804466 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:12:39.804668 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:12:39.806055 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:12:39.834752 systemd-resolved[259]: Detected conflict on linux IN A 10.0.0.19 Jul 10 00:12:39.834764 systemd-resolved[259]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Jul 10 00:12:39.841251 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:12:39.851922 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:12:39.855668 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:12:39.969251 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:12:39.970083 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:12:39.971702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:12:39.974670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 10 00:12:39.976535 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:12:39.978054 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:12:39.978098 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:12:39.978120 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:12:40.007549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:12:40.009332 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:12:40.013291 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Jul 10 00:12:40.013322 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:12:40.015521 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:12:40.015539 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:12:40.020595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:12:40.063495 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:12:40.068879 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:12:40.074935 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:12:40.079841 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:12:40.185281 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:12:40.186878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:12:40.189320 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 10 00:12:40.208228 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:12:40.222562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:12:40.236624 ignition[1002]: INFO : Ignition 2.21.0 Jul 10 00:12:40.236624 ignition[1002]: INFO : Stage: mount Jul 10 00:12:40.238513 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:12:40.238513 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:12:40.242252 ignition[1002]: INFO : mount: mount passed Jul 10 00:12:40.243105 ignition[1002]: INFO : Ignition finished successfully Jul 10 00:12:40.247086 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:12:40.249423 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:12:40.390896 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:12:40.393245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:12:40.429069 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Jul 10 00:12:40.429115 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:12:40.429128 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:12:40.429898 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:12:40.434301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:12:40.477217 ignition[1031]: INFO : Ignition 2.21.0
Jul 10 00:12:40.477217 ignition[1031]: INFO : Stage: files
Jul 10 00:12:40.479095 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:12:40.479095 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:12:40.482112 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:12:40.484482 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:12:40.484482 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:12:40.490334 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:12:40.492285 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:12:40.494453 unknown[1031]: wrote ssh authorized keys file for user: core
Jul 10 00:12:40.495767 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:12:40.498788 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:12:40.501163 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 10 00:12:40.566523 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:12:40.725401 systemd-networkd[853]: eth0: Gained IPv6LL
Jul 10 00:12:40.770485 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 10 00:12:40.772438 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:12:40.772438 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:12:41.246356 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:12:41.358541 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:12:41.358541 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:12:41.362525 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:12:41.380188 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:12:41.400265 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:12:41.402318 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:12:41.404979 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:12:41.404979 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:12:41.404979 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 10 00:12:41.907674 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:12:42.463193 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 10 00:12:42.463193 ignition[1031]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:12:42.467003 ignition[1031]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:12:42.474501 ignition[1031]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:12:42.474501 ignition[1031]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:12:42.474501 ignition[1031]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 10 00:12:42.479290 ignition[1031]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:12:42.481288 ignition[1031]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:12:42.481288 ignition[1031]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 10 00:12:42.485006 ignition[1031]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:12:42.507006 ignition[1031]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:12:42.513477 ignition[1031]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:12:42.515357 ignition[1031]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:12:42.515357 ignition[1031]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:12:42.518240 ignition[1031]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:12:42.518240 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:12:42.518240 ignition[1031]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:12:42.518240 ignition[1031]: INFO : files: files passed
Jul 10 00:12:42.518240 ignition[1031]: INFO : Ignition finished successfully
Jul 10 00:12:42.525593 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:12:42.529816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:12:42.531882 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:12:42.560123 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:12:42.560863 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:12:42.561011 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:12:42.567102 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:12:42.567102 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:12:42.570233 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:12:42.573905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:12:42.576591 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:12:42.577586 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:12:42.629558 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:12:42.629737 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:12:42.630837 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:12:42.632794 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:12:42.635524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:12:42.637742 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:12:42.664745 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:12:42.667245 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:12:42.695954 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:12:42.696149 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:12:42.699477 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:12:42.700775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:12:42.700918 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:12:42.706304 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:12:42.706453 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:12:42.708653 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:12:42.709001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:12:42.709727 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:12:42.710094 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:12:42.710658 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:12:42.711033 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:12:42.711597 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:12:42.711959 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:12:42.712516 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:12:42.728766 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:12:42.728946 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:12:42.731720 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:12:42.731867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:12:42.734932 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:12:42.735052 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:12:42.761535 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:12:42.761654 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:12:42.763780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:12:42.763902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:12:42.766521 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:12:42.768397 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:12:42.773336 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:12:42.776124 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:12:42.776356 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:12:42.779566 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:12:42.779726 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:12:42.782305 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:12:42.782448 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:12:42.785183 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:12:42.785413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:12:42.788728 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:12:42.788903 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:12:42.792848 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:12:42.794945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:12:42.795821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:12:42.795955 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:12:42.797973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:12:42.798134 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:12:42.803760 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:12:42.809389 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:12:42.831009 ignition[1086]: INFO : Ignition 2.21.0
Jul 10 00:12:42.831009 ignition[1086]: INFO : Stage: umount
Jul 10 00:12:42.831009 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:12:42.831009 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:12:42.836013 ignition[1086]: INFO : umount: umount passed
Jul 10 00:12:42.836013 ignition[1086]: INFO : Ignition finished successfully
Jul 10 00:12:42.832452 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:12:42.836462 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:12:42.836613 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:12:42.837373 systemd[1]: Stopped target network.target - Network.
Jul 10 00:12:42.837728 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:12:42.837784 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:12:42.838133 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:12:42.838181 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:12:42.838680 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:12:42.838738 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:12:42.839048 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:12:42.839091 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:12:42.839894 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:12:42.840179 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:12:42.854447 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:12:42.854566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:12:42.860641 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:12:42.861346 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:12:42.861451 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:12:42.865992 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:12:42.872115 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:12:42.872338 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:12:42.877336 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:12:42.877570 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:12:42.878835 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:12:42.878905 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:12:42.891065 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:12:42.892946 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:12:42.893057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:12:42.896407 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:12:42.896494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:12:42.899456 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:12:42.899540 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:12:42.902475 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:12:42.903906 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:12:42.921350 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:12:42.922388 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:12:42.939787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:12:42.939857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:12:42.940931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:12:42.940980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:12:42.943537 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:12:42.943609 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:12:42.946631 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:12:42.946704 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:12:42.947448 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:12:42.947520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:12:42.955988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:12:42.957068 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:12:42.957129 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:12:42.960334 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:12:42.960392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:12:42.964836 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 10 00:12:42.964901 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:12:42.968486 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:12:42.968543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:12:42.968765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:12:42.968809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:12:42.974445 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:12:42.977368 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:12:42.987557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:12:42.987709 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:12:43.045388 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:12:43.045562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:12:43.047913 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:12:43.048664 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:12:43.048725 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:12:43.051851 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:12:43.078005 systemd[1]: Switching root.
Jul 10 00:12:43.113579 systemd-journald[220]: Journal stopped
Jul 10 00:12:44.447932 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:12:44.448003 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:12:44.448017 kernel: SELinux: policy capability open_perms=1
Jul 10 00:12:44.448033 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:12:44.448050 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:12:44.448061 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:12:44.448073 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:12:44.448084 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:12:44.448095 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:12:44.448111 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:12:44.448123 kernel: audit: type=1403 audit(1752106363.621:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:12:44.448140 systemd[1]: Successfully loaded SELinux policy in 53.815ms.
Jul 10 00:12:44.448163 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.306ms.
Jul 10 00:12:44.448177 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:12:44.448190 systemd[1]: Detected virtualization kvm.
Jul 10 00:12:44.448216 systemd[1]: Detected architecture x86-64.
Jul 10 00:12:44.448228 systemd[1]: Detected first boot.
Jul 10 00:12:44.448240 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:12:44.448262 zram_generator::config[1132]: No configuration found.
Jul 10 00:12:44.448275 kernel: Guest personality initialized and is inactive
Jul 10 00:12:44.448287 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 10 00:12:44.448298 kernel: Initialized host personality
Jul 10 00:12:44.448309 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:12:44.448320 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:12:44.448333 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:12:44.448345 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:12:44.448363 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:12:44.448376 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:12:44.448388 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:12:44.448400 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:12:44.448412 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:12:44.448424 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:12:44.448436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:12:44.448448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:12:44.448460 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:12:44.448477 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:12:44.448489 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:12:44.448502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:12:44.448514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:12:44.448526 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:12:44.448539 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:12:44.448551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:12:44.448568 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:12:44.448580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:12:44.448592 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:12:44.448605 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:12:44.448617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:12:44.448630 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:12:44.448642 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:12:44.448654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:12:44.448671 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:12:44.448683 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:12:44.448699 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:12:44.448711 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:12:44.448723 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:12:44.448735 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:12:44.448748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:12:44.448760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:12:44.448772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:12:44.448784 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:12:44.448796 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:12:44.448812 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:12:44.448824 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:12:44.448844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:12:44.448856 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:12:44.448868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:12:44.448880 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:12:44.448893 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:12:44.448907 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:12:44.448929 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:12:44.448941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:12:44.448953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:12:44.448965 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:12:44.448977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:12:44.448989 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:12:44.449001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:12:44.449013 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:12:44.449025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:12:44.449042 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:12:44.449054 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:12:44.449067 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:12:44.449082 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:12:44.449097 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:12:44.449112 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:12:44.449127 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:12:44.449142 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:12:44.449165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:12:44.449180 kernel: fuse: init (API version 7.41)
Jul 10 00:12:44.449194 kernel: loop: module loaded
Jul 10 00:12:44.449222 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:12:44.449234 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:12:44.449247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:12:44.449265 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:12:44.449277 systemd[1]: Stopped verity-setup.service.
Jul 10 00:12:44.449290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:12:44.449301 kernel: ACPI: bus type drm_connector registered
Jul 10 00:12:44.449313 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:12:44.449325 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:12:44.449337 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:12:44.449350 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:12:44.449367 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:12:44.449404 systemd-journald[1203]: Collecting audit messages is disabled.
Jul 10 00:12:44.449428 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:12:44.449440 systemd-journald[1203]: Journal started
Jul 10 00:12:44.449469 systemd-journald[1203]: Runtime Journal (/run/log/journal/9a136db335864723a4e551455373038d) is 6M, max 48.2M, 42.2M free.
Jul 10 00:12:44.449512 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:12:44.180008 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:12:44.201285 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 00:12:44.201830 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:12:44.453222 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:12:44.454605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:12:44.456129 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:12:44.456386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:12:44.457857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:12:44.458130 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:12:44.459581 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:12:44.459855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:12:44.461239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:12:44.461601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:12:44.463354 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:12:44.463596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:12:44.465117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:12:44.465366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:12:44.466886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:12:44.468433 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:12:44.470215 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:12:44.471978 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:12:44.490550 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:12:44.493986 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:12:44.499277 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:12:44.500553 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:12:44.500682 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:12:44.503070 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:12:44.514329 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:12:44.515761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:12:44.517428 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:12:44.521076 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:12:44.523418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:12:44.524940 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:12:44.526140 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:12:44.529309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:12:44.532038 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:12:44.548915 systemd-journald[1203]: Time spent on flushing to /var/log/journal/9a136db335864723a4e551455373038d is 13.247ms for 1038 entries. Jul 10 00:12:44.548915 systemd-journald[1203]: System Journal (/var/log/journal/9a136db335864723a4e551455373038d) is 8M, max 195.6M, 187.6M free. Jul 10 00:12:44.588916 systemd-journald[1203]: Received client request to flush runtime journal. Jul 10 00:12:44.588973 kernel: loop0: detected capacity change from 0 to 113872 Jul 10 00:12:44.538451 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:12:44.543375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:12:44.546498 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:12:44.548740 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jul 10 00:12:44.567423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:12:44.569037 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:12:44.576144 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:12:44.579044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:12:44.603982 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:12:44.606466 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:12:44.617518 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jul 10 00:12:44.618076 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Jul 10 00:12:44.618127 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 00:12:44.626408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:12:44.629792 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:12:44.635235 kernel: loop1: detected capacity change from 0 to 146240 Jul 10 00:12:44.671235 kernel: loop2: detected capacity change from 0 to 229808 Jul 10 00:12:44.673646 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:12:44.678702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:12:44.701750 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jul 10 00:12:44.701771 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jul 10 00:12:44.707178 kernel: loop3: detected capacity change from 0 to 113872 Jul 10 00:12:44.711280 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 10 00:12:44.718244 kernel: loop4: detected capacity change from 0 to 146240 Jul 10 00:12:44.730633 kernel: loop5: detected capacity change from 0 to 229808 Jul 10 00:12:44.739181 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:12:44.739858 (sd-merge)[1275]: Merged extensions into '/usr'. Jul 10 00:12:44.744663 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:12:44.744688 systemd[1]: Reloading... Jul 10 00:12:44.799236 zram_generator::config[1302]: No configuration found. Jul 10 00:12:44.938399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:12:45.000236 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:12:45.025636 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:12:45.025774 systemd[1]: Reloading finished in 280 ms. Jul 10 00:12:45.082550 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:12:45.084522 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:12:45.105739 systemd[1]: Starting ensure-sysext.service... Jul 10 00:12:45.108300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:12:45.120558 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:12:45.120581 systemd[1]: Reloading... Jul 10 00:12:45.136506 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:12:45.136902 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jul 10 00:12:45.137259 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:12:45.137524 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:12:45.138608 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:12:45.138943 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 10 00:12:45.139087 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Jul 10 00:12:45.169889 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:12:45.170070 systemd-tmpfiles[1340]: Skipping /boot Jul 10 00:12:45.183223 zram_generator::config[1373]: No configuration found. Jul 10 00:12:45.189377 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:12:45.189577 systemd-tmpfiles[1340]: Skipping /boot Jul 10 00:12:45.292100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:12:45.374045 systemd[1]: Reloading finished in 252 ms. Jul 10 00:12:45.395274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:12:45.400354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:12:45.403476 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:12:45.407895 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:12:45.444732 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:12:45.448331 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 10 00:12:45.475318 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:12:45.485437 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:12:45.491086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.491415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:12:45.493930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:12:45.500405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:12:45.505503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:12:45.507423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:12:45.507559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:12:45.507657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.508657 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:12:45.515684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.515950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:12:45.516381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 00:12:45.516479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:12:45.516563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.521854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.522098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:12:45.532286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:12:45.533544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:12:45.533660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:12:45.533796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:12:45.534919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:12:45.537248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:12:45.539411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:12:45.539670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:12:45.541512 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:12:45.541741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 10 00:12:45.542111 augenrules[1436]: No rules Jul 10 00:12:45.543539 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:12:45.543768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:12:45.545386 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:12:45.545646 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:12:45.551804 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:12:45.554467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:12:45.556240 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:12:45.562218 systemd[1]: Finished ensure-sysext.service. Jul 10 00:12:45.566591 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:12:45.566656 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:12:45.568997 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:12:45.572897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:12:45.575841 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:12:45.576964 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:12:45.595311 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:12:45.619555 systemd-udevd[1455]: Using default interface naming scheme 'v255'. Jul 10 00:12:45.627912 systemd-resolved[1406]: Positive Trust Anchors: Jul 10 00:12:45.627933 systemd-resolved[1406]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:12:45.627971 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:12:45.632415 systemd-resolved[1406]: Defaulting to hostname 'linux'. Jul 10 00:12:45.634414 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:12:45.635732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:12:45.641602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:12:45.645233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:12:45.648267 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:12:45.649680 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:12:45.650913 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:12:45.652274 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:12:45.653736 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:12:45.654977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:12:45.656335 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jul 10 00:12:45.656371 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:12:45.657277 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:12:45.658480 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:12:45.659637 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:12:45.660857 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:12:45.665951 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:12:45.669833 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:12:45.676122 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:12:45.678818 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:12:45.680611 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:12:45.695055 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:12:45.696737 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:12:45.698681 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:12:45.705899 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:12:45.708362 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:12:45.709355 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:12:45.709388 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:12:45.712352 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:12:45.714347 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:12:45.718460 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 10 00:12:45.722530 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:12:45.779428 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:12:45.787364 jq[1492]: false Jul 10 00:12:45.787837 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:12:45.793639 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:12:45.797625 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:12:45.800607 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Refreshing passwd entry cache Jul 10 00:12:45.800619 oslogin_cache_refresh[1498]: Refreshing passwd entry cache Jul 10 00:12:45.802250 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:12:45.804542 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:12:45.804711 oslogin_cache_refresh[1498]: Failure getting users, quitting Jul 10 00:12:45.807335 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Failure getting users, quitting Jul 10 00:12:45.807335 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:12:45.807335 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Refreshing group entry cache Jul 10 00:12:45.807335 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Failure getting groups, quitting Jul 10 00:12:45.807335 google_oslogin_nss_cache[1498]: oslogin_cache_refresh[1498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:12:45.804727 oslogin_cache_refresh[1498]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jul 10 00:12:45.804774 oslogin_cache_refresh[1498]: Refreshing group entry cache Jul 10 00:12:45.805293 oslogin_cache_refresh[1498]: Failure getting groups, quitting Jul 10 00:12:45.805303 oslogin_cache_refresh[1498]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:12:45.811966 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:12:45.814221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:12:45.814797 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:12:45.817114 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:12:45.820371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:12:45.830977 extend-filesystems[1493]: Found /dev/vda6 Jul 10 00:12:45.840582 extend-filesystems[1493]: Found /dev/vda9 Jul 10 00:12:45.842847 extend-filesystems[1493]: Checking size of /dev/vda9 Jul 10 00:12:45.850217 jq[1513]: true Jul 10 00:12:45.853848 update_engine[1511]: I20250710 00:12:45.853770 1511 main.cc:92] Flatcar Update Engine starting Jul 10 00:12:45.863393 extend-filesystems[1493]: Resized partition /dev/vda9 Jul 10 00:12:45.873219 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:12:45.872725 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:12:45.874287 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:12:45.874543 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:12:45.874878 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:12:45.875119 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jul 10 00:12:45.876123 extend-filesystems[1525]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:12:45.876798 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:12:45.877499 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:12:45.879557 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:12:45.880255 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:12:45.891232 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:12:45.926689 dbus-daemon[1489]: [system] SELinux support is enabled Jul 10 00:12:45.902736 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:12:45.942587 jq[1529]: true Jul 10 00:12:45.942817 update_engine[1511]: I20250710 00:12:45.942425 1511 update_check_scheduler.cc:74] Next update check in 2m44s Jul 10 00:12:45.926851 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:12:45.930513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:12:45.930533 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:12:45.931807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:12:45.931822 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:12:45.942321 systemd[1]: Started update-engine.service - Update Engine. 
Jul 10 00:12:45.946178 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:12:45.950226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 00:12:45.965302 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:12:45.955929 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:12:45.967252 systemd-networkd[1462]: lo: Link UP Jul 10 00:12:45.967264 systemd-networkd[1462]: lo: Gained carrier Jul 10 00:12:45.969908 extend-filesystems[1525]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:12:45.969908 extend-filesystems[1525]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:12:45.969908 extend-filesystems[1525]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:12:45.970108 systemd-networkd[1462]: Enumeration completed Jul 10 00:12:45.970240 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:12:45.971398 systemd[1]: Reached target network.target - Network. Jul 10 00:12:45.972128 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:12:45.972132 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:12:45.973351 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:12:45.973682 systemd-networkd[1462]: eth0: Link UP Jul 10 00:12:45.974854 systemd-networkd[1462]: eth0: Gained carrier Jul 10 00:12:45.974868 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 00:12:45.975970 extend-filesystems[1493]: Resized filesystem in /dev/vda9 Jul 10 00:12:45.978931 tar[1528]: linux-amd64/LICENSE Jul 10 00:12:45.979148 tar[1528]: linux-amd64/helm Jul 10 00:12:45.986556 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:12:45.991868 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:12:45.992711 systemd-networkd[1462]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:12:45.993709 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:12:45.994027 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:12:45.995306 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jul 10 00:12:47.090608 systemd-resolved[1406]: Clock change detected. Flushing caches. Jul 10 00:12:47.090833 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:12:47.090883 systemd-timesyncd[1454]: Initial clock synchronization to Thu 2025-07-10 00:12:47.090569 UTC. Jul 10 00:12:47.093528 systemd-logind[1509]: New seat seat0. Jul 10 00:12:47.096438 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:12:47.118888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:12:47.128916 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:12:47.129109 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:12:47.131630 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jul 10 00:12:47.145160 (ntainerd)[1573]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:12:47.148700 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:12:47.152001 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:12:47.161216 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 00:12:47.161522 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:12:47.161685 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:12:47.163865 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:12:47.250413 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:12:47.258274 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:12:47.269961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:12:47.272261 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:12:47.277614 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:12:47.277928 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:12:47.282024 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:12:47.340102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:12:47.345184 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:12:47.350329 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:12:47.351660 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:12:47.387930 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 10 00:12:47.412018 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:12:47.414518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:12:47.414847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:47.428861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:47.468876 systemd-logind[1509]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:12:47.512950 kernel: kvm_amd: TSC scaling supported Jul 10 00:12:47.513017 kernel: kvm_amd: Nested Virtualization enabled Jul 10 00:12:47.513031 kernel: kvm_amd: Nested Paging enabled Jul 10 00:12:47.514001 kernel: kvm_amd: LBR virtualization supported Jul 10 00:12:47.517825 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 00:12:47.517852 kernel: kvm_amd: Virtual GIF supported Jul 10 00:12:47.586824 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:12:47.590060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 10 00:12:47.604864 containerd[1573]: time="2025-07-10T00:12:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:12:47.605842 containerd[1573]: time="2025-07-10T00:12:47.605776221Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:12:47.617810 containerd[1573]: time="2025-07-10T00:12:47.617721603Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="519.254µs" Jul 10 00:12:47.617853 containerd[1573]: time="2025-07-10T00:12:47.617764944Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:12:47.617972 containerd[1573]: time="2025-07-10T00:12:47.617940684Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:12:47.618185 containerd[1573]: time="2025-07-10T00:12:47.618162420Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:12:47.618218 containerd[1573]: time="2025-07-10T00:12:47.618188449Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:12:47.618238 containerd[1573]: time="2025-07-10T00:12:47.618218245Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:12:47.618334 containerd[1573]: time="2025-07-10T00:12:47.618307442Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:12:47.618334 containerd[1573]: time="2025-07-10T00:12:47.618327600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 
00:12:47.621480 containerd[1573]: time="2025-07-10T00:12:47.621232057Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:12:47.621576 containerd[1573]: time="2025-07-10T00:12:47.621551666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:12:47.621642 containerd[1573]: time="2025-07-10T00:12:47.621628009Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:12:47.621706 containerd[1573]: time="2025-07-10T00:12:47.621692049Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:12:47.621910 containerd[1573]: time="2025-07-10T00:12:47.621892305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:12:47.622211 containerd[1573]: time="2025-07-10T00:12:47.622187859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:12:47.622313 containerd[1573]: time="2025-07-10T00:12:47.622293367Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:12:47.622401 containerd[1573]: time="2025-07-10T00:12:47.622372455Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:12:47.622487 containerd[1573]: time="2025-07-10T00:12:47.622472523Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:12:47.622825 
containerd[1573]: time="2025-07-10T00:12:47.622783847Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:12:47.622942 containerd[1573]: time="2025-07-10T00:12:47.622926795Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629877059Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629914800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629928155Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629940197Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629953643Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629964683Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629977217Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.629989710Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.630001011Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: 
time="2025-07-10T00:12:47.630018645Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.630029836Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.630043261Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.630152866Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:12:47.630812 containerd[1573]: time="2025-07-10T00:12:47.630170309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630184095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630194334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630212919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630227436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630239819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630252673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630265437Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630278011Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630293941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630365094Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630394650Z" level=info msg="Start snapshots syncer" Jul 10 00:12:47.631081 containerd[1573]: time="2025-07-10T00:12:47.630417623Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:12:47.631324 containerd[1573]: time="2025-07-10T00:12:47.630643316Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:12:47.631324 containerd[1573]: time="2025-07-10T00:12:47.630695814Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:12:47.632158 containerd[1573]: time="2025-07-10T00:12:47.632133941Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:12:47.632319 containerd[1573]: time="2025-07-10T00:12:47.632301816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:12:47.632404 containerd[1573]: time="2025-07-10T00:12:47.632387948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:12:47.632471 containerd[1573]: time="2025-07-10T00:12:47.632456927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:12:47.632524 containerd[1573]: time="2025-07-10T00:12:47.632512121Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:12:47.632577 containerd[1573]: time="2025-07-10T00:12:47.632565621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:12:47.632634 containerd[1573]: time="2025-07-10T00:12:47.632622187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:12:47.632684 containerd[1573]: time="2025-07-10T00:12:47.632673213Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:12:47.632747 containerd[1573]: time="2025-07-10T00:12:47.632734848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:12:47.632827 containerd[1573]: time="2025-07-10T00:12:47.632812815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:12:47.632882 containerd[1573]: time="2025-07-10T00:12:47.632870964Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:12:47.632966 containerd[1573]: time="2025-07-10T00:12:47.632951755Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:12:47.633026 containerd[1573]: time="2025-07-10T00:12:47.633011397Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:12:47.633071 containerd[1573]: time="2025-07-10T00:12:47.633060569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:12:47.633122 containerd[1573]: time="2025-07-10T00:12:47.633109491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:12:47.633167 containerd[1573]: time="2025-07-10T00:12:47.633156429Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:12:47.633242 containerd[1573]: time="2025-07-10T00:12:47.633225288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:12:47.633305 containerd[1573]: time="2025-07-10T00:12:47.633290020Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:12:47.633375 containerd[1573]: time="2025-07-10T00:12:47.633361524Z" level=info msg="runtime interface created" Jul 10 00:12:47.633441 containerd[1573]: time="2025-07-10T00:12:47.633429130Z" level=info msg="created NRI interface" Jul 10 00:12:47.633513 containerd[1573]: time="2025-07-10T00:12:47.633498991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:12:47.633565 containerd[1573]: time="2025-07-10T00:12:47.633553764Z" level=info msg="Connect containerd service" Jul 10 00:12:47.633642 containerd[1573]: time="2025-07-10T00:12:47.633627282Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:12:47.634519 
containerd[1573]: time="2025-07-10T00:12:47.634496041Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:12:47.791281 containerd[1573]: time="2025-07-10T00:12:47.791201239Z" level=info msg="Start subscribing containerd event" Jul 10 00:12:47.791438 containerd[1573]: time="2025-07-10T00:12:47.791318730Z" level=info msg="Start recovering state" Jul 10 00:12:47.791438 containerd[1573]: time="2025-07-10T00:12:47.791346873Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:12:47.791487 containerd[1573]: time="2025-07-10T00:12:47.791458191Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:12:47.791508 containerd[1573]: time="2025-07-10T00:12:47.791485943Z" level=info msg="Start event monitor" Jul 10 00:12:47.791528 containerd[1573]: time="2025-07-10T00:12:47.791506873Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:12:47.791528 containerd[1573]: time="2025-07-10T00:12:47.791518605Z" level=info msg="Start streaming server" Jul 10 00:12:47.791565 containerd[1573]: time="2025-07-10T00:12:47.791549623Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:12:47.791594 containerd[1573]: time="2025-07-10T00:12:47.791561836Z" level=info msg="runtime interface starting up..." Jul 10 00:12:47.791594 containerd[1573]: time="2025-07-10T00:12:47.791572496Z" level=info msg="starting plugins..." Jul 10 00:12:47.791646 containerd[1573]: time="2025-07-10T00:12:47.791596461Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:12:47.791880 containerd[1573]: time="2025-07-10T00:12:47.791854274Z" level=info msg="containerd successfully booted in 0.187459s" Jul 10 00:12:47.791985 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 10 00:12:47.873584 tar[1528]: linux-amd64/README.md Jul 10 00:12:47.907099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:12:48.412123 systemd-networkd[1462]: eth0: Gained IPv6LL Jul 10 00:12:48.415784 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:12:48.417943 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:12:48.420941 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:12:48.423712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:12:48.425974 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:12:48.455050 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:12:48.457021 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:12:48.457315 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:12:48.459697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:12:49.609206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:12:49.611152 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:12:49.612511 systemd[1]: Startup finished in 3.573s (kernel) + 6.979s (initrd) + 4.949s (userspace) = 15.502s. 
Jul 10 00:12:49.624254 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:12:50.144519 kubelet[1668]: E0710 00:12:50.144431 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:12:50.148903 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:12:50.149118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:12:50.149574 systemd[1]: kubelet.service: Consumed 1.543s CPU time, 266.9M memory peak. Jul 10 00:12:51.730431 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:12:51.732047 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:50872.service - OpenSSH per-connection server daemon (10.0.0.1:50872). Jul 10 00:12:51.797909 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 50872 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:51.799781 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:51.807020 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:12:51.808210 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:12:51.815279 systemd-logind[1509]: New session 1 of user core. Jul 10 00:12:51.838053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:12:51.841210 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 10 00:12:51.858262 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:12:51.860756 systemd-logind[1509]: New session c1 of user core. Jul 10 00:12:52.005266 systemd[1685]: Queued start job for default target default.target. Jul 10 00:12:52.027181 systemd[1685]: Created slice app.slice - User Application Slice. Jul 10 00:12:52.027207 systemd[1685]: Reached target paths.target - Paths. Jul 10 00:12:52.027245 systemd[1685]: Reached target timers.target - Timers. Jul 10 00:12:52.029077 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:12:52.041211 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:12:52.041349 systemd[1685]: Reached target sockets.target - Sockets. Jul 10 00:12:52.041391 systemd[1685]: Reached target basic.target - Basic System. Jul 10 00:12:52.041431 systemd[1685]: Reached target default.target - Main User Target. Jul 10 00:12:52.041466 systemd[1685]: Startup finished in 173ms. Jul 10 00:12:52.042214 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:12:52.044160 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:12:52.108259 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:50878.service - OpenSSH per-connection server daemon (10.0.0.1:50878). Jul 10 00:12:52.157785 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 50878 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:52.159154 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:52.163482 systemd-logind[1509]: New session 2 of user core. Jul 10 00:12:52.172927 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 10 00:12:52.225757 sshd[1698]: Connection closed by 10.0.0.1 port 50878 Jul 10 00:12:52.226120 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 10 00:12:52.239511 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:50878.service: Deactivated successfully. Jul 10 00:12:52.241438 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:12:52.242148 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:12:52.245274 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:50882.service - OpenSSH per-connection server daemon (10.0.0.1:50882). Jul 10 00:12:52.245841 systemd-logind[1509]: Removed session 2. Jul 10 00:12:52.297326 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 50882 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:52.298570 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:52.302981 systemd-logind[1509]: New session 3 of user core. Jul 10 00:12:52.316951 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:12:52.365784 sshd[1707]: Connection closed by 10.0.0.1 port 50882 Jul 10 00:12:52.366355 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jul 10 00:12:52.381534 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:50882.service: Deactivated successfully. Jul 10 00:12:52.383584 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:12:52.384325 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:12:52.387405 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:50898.service - OpenSSH per-connection server daemon (10.0.0.1:50898). Jul 10 00:12:52.388002 systemd-logind[1509]: Removed session 3. 
Jul 10 00:12:52.437191 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:52.438716 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:52.443097 systemd-logind[1509]: New session 4 of user core. Jul 10 00:12:52.452915 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:12:52.506099 sshd[1715]: Connection closed by 10.0.0.1 port 50898 Jul 10 00:12:52.506404 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jul 10 00:12:52.519919 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:50898.service: Deactivated successfully. Jul 10 00:12:52.522034 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:12:52.522761 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:12:52.525904 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:50900.service - OpenSSH per-connection server daemon (10.0.0.1:50900). Jul 10 00:12:52.526515 systemd-logind[1509]: Removed session 4. Jul 10 00:12:52.582327 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 50900 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:52.583679 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:52.587725 systemd-logind[1509]: New session 5 of user core. Jul 10 00:12:52.600919 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 00:12:52.658138 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:12:52.658477 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:12:52.845662 sudo[1724]: pam_unix(sudo:session): session closed for user root Jul 10 00:12:52.847668 sshd[1723]: Connection closed by 10.0.0.1 port 50900 Jul 10 00:12:52.848055 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Jul 10 00:12:52.859701 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:50900.service: Deactivated successfully. Jul 10 00:12:52.862485 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:12:52.863696 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:12:52.867226 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:50906.service - OpenSSH per-connection server daemon (10.0.0.1:50906). Jul 10 00:12:52.867749 systemd-logind[1509]: Removed session 5. Jul 10 00:12:52.935208 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 50906 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:52.936670 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:52.941233 systemd-logind[1509]: New session 6 of user core. Jul 10 00:12:52.951918 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:12:53.007951 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:12:53.008299 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:12:53.016284 sudo[1734]: pam_unix(sudo:session): session closed for user root Jul 10 00:12:53.024590 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:12:53.024938 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:12:53.035854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:12:53.091883 augenrules[1756]: No rules Jul 10 00:12:53.093867 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:12:53.094167 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:12:53.095482 sudo[1733]: pam_unix(sudo:session): session closed for user root Jul 10 00:12:53.097212 sshd[1732]: Connection closed by 10.0.0.1 port 50906 Jul 10 00:12:53.097488 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 10 00:12:53.113338 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:50906.service: Deactivated successfully. Jul 10 00:12:53.115392 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:12:53.116204 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:12:53.119954 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:50916.service - OpenSSH per-connection server daemon (10.0.0.1:50916). Jul 10 00:12:53.120521 systemd-logind[1509]: Removed session 6. Jul 10 00:12:53.174552 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 50916 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:12:53.206074 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:12:53.211144 systemd-logind[1509]: New session 7 of user core. 
Jul 10 00:12:53.224951 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:12:53.280254 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:12:53.280593 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:12:53.633057 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:12:53.658147 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:12:54.118941 dockerd[1788]: time="2025-07-10T00:12:54.118580224Z" level=info msg="Starting up" Jul 10 00:12:54.120012 dockerd[1788]: time="2025-07-10T00:12:54.119986421Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:12:54.548183 dockerd[1788]: time="2025-07-10T00:12:54.547990221Z" level=info msg="Loading containers: start." Jul 10 00:12:54.558854 kernel: Initializing XFRM netlink socket Jul 10 00:12:54.815555 systemd-networkd[1462]: docker0: Link UP Jul 10 00:12:54.823024 dockerd[1788]: time="2025-07-10T00:12:54.822930963Z" level=info msg="Loading containers: done." 
Jul 10 00:12:55.091712 dockerd[1788]: time="2025-07-10T00:12:55.091571920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:12:55.091712 dockerd[1788]: time="2025-07-10T00:12:55.091680534Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:12:55.092029 dockerd[1788]: time="2025-07-10T00:12:55.091825225Z" level=info msg="Initializing buildkit" Jul 10 00:12:55.136777 dockerd[1788]: time="2025-07-10T00:12:55.136723886Z" level=info msg="Completed buildkit initialization" Jul 10 00:12:55.143022 dockerd[1788]: time="2025-07-10T00:12:55.142956875Z" level=info msg="Daemon has completed initialization" Jul 10 00:12:55.143022 dockerd[1788]: time="2025-07-10T00:12:55.143027187Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:12:55.143284 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:12:55.956228 containerd[1573]: time="2025-07-10T00:12:55.956162767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 00:12:56.570216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082795408.mount: Deactivated successfully. 
Jul 10 00:12:57.600669 containerd[1573]: time="2025-07-10T00:12:57.600580096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:57.601423 containerd[1573]: time="2025-07-10T00:12:57.601330784Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 10 00:12:57.602611 containerd[1573]: time="2025-07-10T00:12:57.602555351Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:57.606377 containerd[1573]: time="2025-07-10T00:12:57.606332745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:57.607225 containerd[1573]: time="2025-07-10T00:12:57.607194320Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.650966101s" Jul 10 00:12:57.607271 containerd[1573]: time="2025-07-10T00:12:57.607229226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 10 00:12:57.607877 containerd[1573]: time="2025-07-10T00:12:57.607849960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 00:12:59.275408 containerd[1573]: time="2025-07-10T00:12:59.275302188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:59.276018 containerd[1573]: time="2025-07-10T00:12:59.275972996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 10 00:12:59.277182 containerd[1573]: time="2025-07-10T00:12:59.277134254Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:59.279406 containerd[1573]: time="2025-07-10T00:12:59.279368063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:12:59.280224 containerd[1573]: time="2025-07-10T00:12:59.280191327Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.672308325s" Jul 10 00:12:59.280224 containerd[1573]: time="2025-07-10T00:12:59.280223898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 10 00:12:59.280875 containerd[1573]: time="2025-07-10T00:12:59.280838991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 00:13:00.399904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:13:00.402560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:00.722121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:13:00.742112 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:13:01.219143 containerd[1573]: time="2025-07-10T00:13:01.218982362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:01.220363 containerd[1573]: time="2025-07-10T00:13:01.220303790Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 10 00:13:01.221607 containerd[1573]: time="2025-07-10T00:13:01.221561569Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:01.224283 containerd[1573]: time="2025-07-10T00:13:01.224236746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:01.225495 containerd[1573]: time="2025-07-10T00:13:01.225412922Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.944527984s" Jul 10 00:13:01.225495 containerd[1573]: time="2025-07-10T00:13:01.225490056Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 10 00:13:01.226077 containerd[1573]: time="2025-07-10T00:13:01.226022795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:13:01.358678 
kubelet[2070]: E0710 00:13:01.358060 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:13:01.367208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:13:01.367456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:13:01.367923 systemd[1]: kubelet.service: Consumed 470ms CPU time, 110.9M memory peak. Jul 10 00:13:03.087961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount572969706.mount: Deactivated successfully. Jul 10 00:13:03.566595 containerd[1573]: time="2025-07-10T00:13:03.566436997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:03.567381 containerd[1573]: time="2025-07-10T00:13:03.567347064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 10 00:13:03.568396 containerd[1573]: time="2025-07-10T00:13:03.568356948Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:03.570401 containerd[1573]: time="2025-07-10T00:13:03.570351028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:03.571003 containerd[1573]: time="2025-07-10T00:13:03.570958938Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.344889825s" Jul 10 00:13:03.571040 containerd[1573]: time="2025-07-10T00:13:03.571001948Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 10 00:13:03.571533 containerd[1573]: time="2025-07-10T00:13:03.571492228Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 00:13:04.106845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578893033.mount: Deactivated successfully. Jul 10 00:13:05.148566 containerd[1573]: time="2025-07-10T00:13:05.148491955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:05.149226 containerd[1573]: time="2025-07-10T00:13:05.149191407Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 10 00:13:05.150526 containerd[1573]: time="2025-07-10T00:13:05.150483951Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:05.153348 containerd[1573]: time="2025-07-10T00:13:05.153302407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:05.154204 containerd[1573]: time="2025-07-10T00:13:05.154167349Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.582645245s" Jul 10 00:13:05.154257 containerd[1573]: time="2025-07-10T00:13:05.154205871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 10 00:13:05.154776 containerd[1573]: time="2025-07-10T00:13:05.154642550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:13:05.663435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47056406.mount: Deactivated successfully. Jul 10 00:13:05.668940 containerd[1573]: time="2025-07-10T00:13:05.668883155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:05.669637 containerd[1573]: time="2025-07-10T00:13:05.669616410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:13:05.670759 containerd[1573]: time="2025-07-10T00:13:05.670715601Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:05.672756 containerd[1573]: time="2025-07-10T00:13:05.672705603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:05.673388 containerd[1573]: time="2025-07-10T00:13:05.673349531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 518.679549ms" Jul 10 00:13:05.673388 containerd[1573]: time="2025-07-10T00:13:05.673378735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:13:05.673873 containerd[1573]: time="2025-07-10T00:13:05.673847244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 00:13:06.230153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990608411.mount: Deactivated successfully. Jul 10 00:13:08.371816 containerd[1573]: time="2025-07-10T00:13:08.371716963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:08.373211 containerd[1573]: time="2025-07-10T00:13:08.373094546Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 10 00:13:08.376809 containerd[1573]: time="2025-07-10T00:13:08.375425387Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:08.379653 containerd[1573]: time="2025-07-10T00:13:08.379571974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:08.380949 containerd[1573]: time="2025-07-10T00:13:08.380901717Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.707024908s" Jul 10 00:13:08.380949 containerd[1573]: time="2025-07-10T00:13:08.380940079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 10 00:13:11.277291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:11.277461 systemd[1]: kubelet.service: Consumed 470ms CPU time, 110.9M memory peak. Jul 10 00:13:11.279721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:11.306640 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)... Jul 10 00:13:11.306657 systemd[1]: Reloading... Jul 10 00:13:11.390963 zram_generator::config[2272]: No configuration found. Jul 10 00:13:11.610353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:13:11.746698 systemd[1]: Reloading finished in 439 ms. Jul 10 00:13:11.820635 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:13:11.820760 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:13:11.821164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:11.821221 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.3M memory peak. Jul 10 00:13:11.823020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:11.991929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:13:11.996836 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:13:12.039940 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:13:12.040379 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:13:12.040379 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:13:12.040482 kubelet[2320]: I0710 00:13:12.040444 2320 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:13:12.720293 kubelet[2320]: I0710 00:13:12.720236 2320 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:13:12.720293 kubelet[2320]: I0710 00:13:12.720272 2320 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:13:12.720554 kubelet[2320]: I0710 00:13:12.720530 2320 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:13:12.775816 kubelet[2320]: E0710 00:13:12.775705 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:13:12.775968 kubelet[2320]: I0710 00:13:12.775863 2320 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:13:12.792960 kubelet[2320]: I0710 00:13:12.792928 2320 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:13:12.801823 kubelet[2320]: I0710 00:13:12.801685 2320 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:13:12.802161 kubelet[2320]: I0710 00:13:12.802114 2320 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:13:12.802935 kubelet[2320]: I0710 00:13:12.802157 2320 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMana
gerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:13:12.802935 kubelet[2320]: I0710 00:13:12.802948 2320 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:13:12.803320 kubelet[2320]: I0710 00:13:12.802964 2320 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:13:12.806678 kubelet[2320]: I0710 00:13:12.806639 2320 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:13:12.843049 kubelet[2320]: I0710 00:13:12.843001 2320 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:13:12.843049 kubelet[2320]: I0710 00:13:12.843033 2320 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:13:12.843176 kubelet[2320]: I0710 00:13:12.843098 2320 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:13:12.861460 kubelet[2320]: I0710 00:13:12.861317 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:13:12.861460 kubelet[2320]: E0710 00:13:12.861402 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:13:12.862510 kubelet[2320]: E0710 00:13:12.862474 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:13:12.877842 kubelet[2320]: I0710 
00:13:12.877811 2320 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:13:12.878266 kubelet[2320]: I0710 00:13:12.878238 2320 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:13:12.878955 kubelet[2320]: W0710 00:13:12.878929 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:13:12.882198 kubelet[2320]: I0710 00:13:12.882172 2320 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:13:12.882251 kubelet[2320]: I0710 00:13:12.882229 2320 server.go:1289] "Started kubelet" Jul 10 00:13:12.888163 kubelet[2320]: I0710 00:13:12.888126 2320 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:13:12.888163 kubelet[2320]: I0710 00:13:12.888143 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:13:12.889028 kubelet[2320]: I0710 00:13:12.888994 2320 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:13:12.896259 kubelet[2320]: I0710 00:13:12.896232 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:13:12.897010 kubelet[2320]: I0710 00:13:12.896989 2320 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:13:12.897173 kubelet[2320]: E0710 00:13:12.897153 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:12.897724 kubelet[2320]: I0710 00:13:12.897696 2320 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:13:12.897820 kubelet[2320]: I0710 00:13:12.897782 2320 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:13:12.899494 kubelet[2320]: I0710 00:13:12.899422 2320 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:13:12.899768 kubelet[2320]: I0710 00:13:12.899718 2320 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:13:12.899876 kubelet[2320]: E0710 00:13:12.899773 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:13:12.899876 kubelet[2320]: E0710 00:13:12.899860 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Jul 10 00:13:12.901338 kubelet[2320]: E0710 00:13:12.901301 2320 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:13:12.901813 kubelet[2320]: I0710 00:13:12.901738 2320 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:13:12.901813 kubelet[2320]: I0710 00:13:12.901767 2320 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:13:12.901960 kubelet[2320]: I0710 00:13:12.901923 2320 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:13:12.955881 kubelet[2320]: E0710 00:13:12.952625 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bb7b346f2d30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:13:12.88219576 +0000 UTC m=+0.878089553,LastTimestamp:2025-07-10 00:13:12.88219576 +0000 UTC m=+0.878089553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:13:12.956466 kubelet[2320]: I0710 00:13:12.956170 2320 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:13:12.956466 kubelet[2320]: I0710 00:13:12.956190 2320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:13:12.956466 kubelet[2320]: I0710 00:13:12.956217 2320 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:13:12.961079 kubelet[2320]: I0710 00:13:12.961032 2320 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 10 00:13:12.962761 kubelet[2320]: I0710 00:13:12.962720 2320 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:13:12.962861 kubelet[2320]: I0710 00:13:12.962785 2320 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:13:12.963824 kubelet[2320]: I0710 00:13:12.962980 2320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:13:12.963824 kubelet[2320]: I0710 00:13:12.962994 2320 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:13:12.963824 kubelet[2320]: E0710 00:13:12.963042 2320 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:13:12.964089 kubelet[2320]: E0710 00:13:12.964022 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:13:12.997338 kubelet[2320]: E0710 00:13:12.997211 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.063535 kubelet[2320]: E0710 00:13:13.063444 2320 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:13:13.097763 kubelet[2320]: E0710 00:13:13.097686 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.100519 kubelet[2320]: E0710 00:13:13.100465 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.19:6443: connect: connection refused" interval="400ms" Jul 10 00:13:13.198571 kubelet[2320]: E0710 00:13:13.198534 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.264025 kubelet[2320]: E0710 00:13:13.263888 2320 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:13:13.299525 kubelet[2320]: E0710 00:13:13.299468 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.400537 kubelet[2320]: E0710 00:13:13.400452 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.501240 kubelet[2320]: E0710 00:13:13.501171 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.501723 kubelet[2320]: E0710 00:13:13.501684 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Jul 10 00:13:13.602436 kubelet[2320]: E0710 00:13:13.602243 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.664588 kubelet[2320]: E0710 00:13:13.664484 2320 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:13:13.669330 kubelet[2320]: E0710 00:13:13.669279 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jul 10 00:13:13.675237 kubelet[2320]: E0710 00:13:13.675187 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:13:13.702860 kubelet[2320]: E0710 00:13:13.702785 2320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:13:13.747319 kubelet[2320]: I0710 00:13:13.747260 2320 policy_none.go:49] "None policy: Start" Jul 10 00:13:13.747387 kubelet[2320]: I0710 00:13:13.747323 2320 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:13:13.747387 kubelet[2320]: I0710 00:13:13.747344 2320 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:13:13.755652 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:13:13.767260 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:13:13.770651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:13:13.786036 kubelet[2320]: E0710 00:13:13.786002 2320 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:13:13.786314 kubelet[2320]: I0710 00:13:13.786292 2320 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:13:13.786351 kubelet[2320]: I0710 00:13:13.786311 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:13:13.786591 kubelet[2320]: I0710 00:13:13.786559 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:13:13.787555 kubelet[2320]: E0710 00:13:13.787517 2320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:13:13.787607 kubelet[2320]: E0710 00:13:13.787558 2320 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:13:13.887832 kubelet[2320]: I0710 00:13:13.887723 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:13:13.888198 kubelet[2320]: E0710 00:13:13.888142 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:13:13.969107 kubelet[2320]: E0710 00:13:13.969045 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:13:14.090669 kubelet[2320]: I0710 00:13:14.090611 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:13:14.091188 kubelet[2320]: E0710 
00:13:14.091111 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:13:14.113436 kubelet[2320]: E0710 00:13:14.113305 2320 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:13:14.303254 kubelet[2320]: E0710 00:13:14.303169 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Jul 10 00:13:14.478422 systemd[1]: Created slice kubepods-burstable-pod604a504c60e55c8b681b0294a2ed3222.slice - libcontainer container kubepods-burstable-pod604a504c60e55c8b681b0294a2ed3222.slice. Jul 10 00:13:14.490019 kubelet[2320]: E0710 00:13:14.489962 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:14.492723 kubelet[2320]: I0710 00:13:14.492659 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:13:14.493020 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 10 00:13:14.493694 kubelet[2320]: E0710 00:13:14.493174 2320 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 10 00:13:14.504525 kubelet[2320]: E0710 00:13:14.504493 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:14.506524 kubelet[2320]: I0710 00:13:14.506484 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:14.506592 kubelet[2320]: I0710 00:13:14.506529 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:14.506592 kubelet[2320]: I0710 00:13:14.506559 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:14.506642 kubelet[2320]: I0710 00:13:14.506612 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:14.506845 kubelet[2320]: I0710 00:13:14.506646 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:14.506845 kubelet[2320]: I0710 00:13:14.506721 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:14.506845 kubelet[2320]: I0710 00:13:14.506753 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:14.506845 kubelet[2320]: I0710 00:13:14.506778 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:14.506845 kubelet[2320]: I0710 00:13:14.506822 2320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:14.508020 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 10 00:13:14.510181 kubelet[2320]: E0710 00:13:14.510161 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:14.790936 kubelet[2320]: E0710 00:13:14.790882 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:14.791711 containerd[1573]: time="2025-07-10T00:13:14.791646272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:604a504c60e55c8b681b0294a2ed3222,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:14.799150 kubelet[2320]: E0710 00:13:14.799099 2320 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:13:14.805585 kubelet[2320]: E0710 00:13:14.805558 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:14.806122 containerd[1573]: time="2025-07-10T00:13:14.806078227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:14.810672 kubelet[2320]: E0710 00:13:14.810636 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:14.813454 containerd[1573]: time="2025-07-10T00:13:14.813028421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:14.827864 containerd[1573]: time="2025-07-10T00:13:14.827810062Z" level=info msg="connecting to shim 906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9" address="unix:///run/containerd/s/338d3f11aca7bc9c2b14de17f1384913cb4c77b4b863a6a0f1e53e38b31efc1d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:14.891199 containerd[1573]: time="2025-07-10T00:13:14.891107227Z" level=info msg="connecting to shim bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857" address="unix:///run/containerd/s/b0b33ab1dd67b87627d4ff99a64f08c0ceb2be716fde2b12c1fb50f48ff82899" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:14.894481 containerd[1573]: time="2025-07-10T00:13:14.894421642Z" level=info msg="connecting to shim f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae" address="unix:///run/containerd/s/92953bc8ca7b80ced7ad2487dd6342b5a1f543c1779a6895e408c66e3bccc14d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:14.909032 systemd[1]: Started cri-containerd-906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9.scope - libcontainer container 906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9. Jul 10 00:13:14.950102 systemd[1]: Started cri-containerd-bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857.scope - libcontainer container bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857. Jul 10 00:13:14.954894 systemd[1]: Started cri-containerd-f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae.scope - libcontainer container f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae. 
Jul 10 00:13:15.013838 containerd[1573]: time="2025-07-10T00:13:15.013758361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:604a504c60e55c8b681b0294a2ed3222,Namespace:kube-system,Attempt:0,} returns sandbox id \"906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9\"" Jul 10 00:13:15.016442 kubelet[2320]: E0710 00:13:15.016409 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:15.017705 containerd[1573]: time="2025-07-10T00:13:15.017670498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857\"" Jul 10 00:13:15.019780 kubelet[2320]: E0710 00:13:15.019758 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:15.022828 containerd[1573]: time="2025-07-10T00:13:15.022776313Z" level=info msg="CreateContainer within sandbox \"906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:13:15.023201 containerd[1573]: time="2025-07-10T00:13:15.023172436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae\"" Jul 10 00:13:15.023683 kubelet[2320]: E0710 00:13:15.023652 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:15.025322 containerd[1573]: 
time="2025-07-10T00:13:15.025246145Z" level=info msg="CreateContainer within sandbox \"bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:13:15.027948 containerd[1573]: time="2025-07-10T00:13:15.027915190Z" level=info msg="CreateContainer within sandbox \"f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:13:15.034074 containerd[1573]: time="2025-07-10T00:13:15.034019328Z" level=info msg="Container 6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:15.039038 containerd[1573]: time="2025-07-10T00:13:15.039000349Z" level=info msg="Container 6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:15.045510 containerd[1573]: time="2025-07-10T00:13:15.045408757Z" level=info msg="Container d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:15.046640 containerd[1573]: time="2025-07-10T00:13:15.046588479Z" level=info msg="CreateContainer within sandbox \"906525a1b61a159c4cd0a158f564c5ec3e0c52fa6667f8e75d5c9760d7e731a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04\"" Jul 10 00:13:15.047899 containerd[1573]: time="2025-07-10T00:13:15.047868620Z" level=info msg="StartContainer for \"6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04\"" Jul 10 00:13:15.049501 containerd[1573]: time="2025-07-10T00:13:15.049470063Z" level=info msg="connecting to shim 6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04" address="unix:///run/containerd/s/338d3f11aca7bc9c2b14de17f1384913cb4c77b4b863a6a0f1e53e38b31efc1d" protocol=ttrpc version=3 Jul 10 00:13:15.052649 
containerd[1573]: time="2025-07-10T00:13:15.052597338Z" level=info msg="CreateContainer within sandbox \"bc5b5715621b8b1c4f3e4d68fc07f0f081bf4ca928e6621da5cccc14d8826857\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42\"" Jul 10 00:13:15.053354 containerd[1573]: time="2025-07-10T00:13:15.053320484Z" level=info msg="StartContainer for \"6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42\"" Jul 10 00:13:15.055173 containerd[1573]: time="2025-07-10T00:13:15.055115921Z" level=info msg="connecting to shim 6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42" address="unix:///run/containerd/s/b0b33ab1dd67b87627d4ff99a64f08c0ceb2be716fde2b12c1fb50f48ff82899" protocol=ttrpc version=3 Jul 10 00:13:15.059321 containerd[1573]: time="2025-07-10T00:13:15.059282936Z" level=info msg="CreateContainer within sandbox \"f97b15e13791fe025ce08c3f8993c154047010d3c7c9722aef4ce0e92b2c67ae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe\"" Jul 10 00:13:15.059981 containerd[1573]: time="2025-07-10T00:13:15.059949526Z" level=info msg="StartContainer for \"d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe\"" Jul 10 00:13:15.061849 containerd[1573]: time="2025-07-10T00:13:15.061756204Z" level=info msg="connecting to shim d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe" address="unix:///run/containerd/s/92953bc8ca7b80ced7ad2487dd6342b5a1f543c1779a6895e408c66e3bccc14d" protocol=ttrpc version=3 Jul 10 00:13:15.089086 systemd[1]: Started cri-containerd-6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04.scope - libcontainer container 6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04. 
Jul 10 00:13:15.100048 systemd[1]: Started cri-containerd-6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42.scope - libcontainer container 6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42. Jul 10 00:13:15.109013 systemd[1]: Started cri-containerd-d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe.scope - libcontainer container d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe. Jul 10 00:13:15.155232 containerd[1573]: time="2025-07-10T00:13:15.155015098Z" level=info msg="StartContainer for \"6b5c4ba233a1a9683f31b8d9141c8700eb76324bdc292273725bd2a9e4b97a04\" returns successfully" Jul 10 00:13:15.166174 containerd[1573]: time="2025-07-10T00:13:15.163746442Z" level=info msg="StartContainer for \"6539f7a7a11d6808dc59974f20b5aab6f15c9fa4cf88378469ac84edf44c5a42\" returns successfully" Jul 10 00:13:15.220062 containerd[1573]: time="2025-07-10T00:13:15.220009957Z" level=info msg="StartContainer for \"d147fa35217b25bb53c8fab496344bf69453689d97b05de068b873c8484302fe\" returns successfully" Jul 10 00:13:15.295669 kubelet[2320]: I0710 00:13:15.295521 2320 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:13:15.978147 kubelet[2320]: E0710 00:13:15.978099 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:15.978355 kubelet[2320]: E0710 00:13:15.978228 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:15.982555 kubelet[2320]: E0710 00:13:15.982522 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:15.982677 kubelet[2320]: E0710 00:13:15.982643 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:15.988185 kubelet[2320]: E0710 00:13:15.988155 2320 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:13:15.988286 kubelet[2320]: E0710 00:13:15.988263 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:16.648827 kubelet[2320]: E0710 00:13:16.648742 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:13:16.832384 kubelet[2320]: I0710 00:13:16.832317 2320 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:13:16.864064 kubelet[2320]: I0710 00:13:16.864010 2320 apiserver.go:52] "Watching apiserver" Jul 10 00:13:16.897956 kubelet[2320]: I0710 00:13:16.897914 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:16.898153 kubelet[2320]: I0710 00:13:16.897988 2320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:13:16.903899 kubelet[2320]: E0710 00:13:16.903745 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:16.904028 kubelet[2320]: I0710 00:13:16.903990 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:16.906269 kubelet[2320]: E0710 00:13:16.906225 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:16.906269 kubelet[2320]: I0710 00:13:16.906253 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:16.907694 kubelet[2320]: E0710 00:13:16.907633 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:16.988787 kubelet[2320]: I0710 00:13:16.988718 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:16.989139 kubelet[2320]: I0710 00:13:16.988957 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:16.991203 kubelet[2320]: E0710 00:13:16.991159 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:16.991350 kubelet[2320]: E0710 00:13:16.991328 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:16.993100 kubelet[2320]: E0710 00:13:16.993056 2320 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:16.993236 kubelet[2320]: E0710 00:13:16.993213 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:18.468260 kubelet[2320]: I0710 00:13:18.468205 2320 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 
10 00:13:18.588552 kubelet[2320]: E0710 00:13:18.588511 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:18.992496 kubelet[2320]: E0710 00:13:18.992447 2320 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:19.946420 systemd[1]: Reload requested from client PID 2608 ('systemctl') (unit session-7.scope)... Jul 10 00:13:19.946435 systemd[1]: Reloading... Jul 10 00:13:20.030854 zram_generator::config[2652]: No configuration found. Jul 10 00:13:20.130037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:13:20.275371 systemd[1]: Reloading finished in 328 ms. Jul 10 00:13:20.303507 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:20.332235 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:13:20.332609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:20.332662 systemd[1]: kubelet.service: Consumed 1.362s CPU time, 135M memory peak. Jul 10 00:13:20.334700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:20.538668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:20.548378 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:13:20.587621 kubelet[2696]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:13:20.587621 kubelet[2696]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:13:20.587621 kubelet[2696]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:13:20.588083 kubelet[2696]: I0710 00:13:20.587648 2696 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:13:20.593783 kubelet[2696]: I0710 00:13:20.593749 2696 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:13:20.593783 kubelet[2696]: I0710 00:13:20.593773 2696 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:13:20.595266 kubelet[2696]: I0710 00:13:20.594343 2696 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:13:20.596357 kubelet[2696]: I0710 00:13:20.596334 2696 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 00:13:20.653157 kubelet[2696]: I0710 00:13:20.653071 2696 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:13:20.658718 kubelet[2696]: I0710 00:13:20.658688 2696 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:13:20.664433 kubelet[2696]: I0710 00:13:20.664400 2696 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:13:20.664747 kubelet[2696]: I0710 00:13:20.664699 2696 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:13:20.664914 kubelet[2696]: I0710 00:13:20.664738 2696 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:13:20.664998 kubelet[2696]: I0710 00:13:20.664930 2696 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:13:20.664998 
kubelet[2696]: I0710 00:13:20.664942 2696 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:13:20.665046 kubelet[2696]: I0710 00:13:20.665004 2696 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:13:20.665215 kubelet[2696]: I0710 00:13:20.665198 2696 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:13:20.665243 kubelet[2696]: I0710 00:13:20.665232 2696 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:13:20.665747 kubelet[2696]: I0710 00:13:20.665302 2696 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:13:20.665747 kubelet[2696]: I0710 00:13:20.665328 2696 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:13:20.667042 kubelet[2696]: I0710 00:13:20.667019 2696 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:13:20.667767 kubelet[2696]: I0710 00:13:20.667700 2696 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:13:20.673883 kubelet[2696]: I0710 00:13:20.673858 2696 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:13:20.674086 kubelet[2696]: I0710 00:13:20.674063 2696 server.go:1289] "Started kubelet" Jul 10 00:13:20.674291 kubelet[2696]: I0710 00:13:20.674227 2696 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:13:20.674400 kubelet[2696]: I0710 00:13:20.674256 2696 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:13:20.674902 kubelet[2696]: I0710 00:13:20.674870 2696 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:13:20.676693 kubelet[2696]: I0710 00:13:20.675973 2696 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:13:20.678055 
kubelet[2696]: I0710 00:13:20.677831 2696 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:13:20.679325 kubelet[2696]: I0710 00:13:20.679162 2696 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:13:20.681178 kubelet[2696]: I0710 00:13:20.681137 2696 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:13:20.684280 kubelet[2696]: E0710 00:13:20.683243 2696 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:13:20.684280 kubelet[2696]: I0710 00:13:20.683338 2696 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:13:20.684280 kubelet[2696]: I0710 00:13:20.683346 2696 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:13:20.684280 kubelet[2696]: I0710 00:13:20.684196 2696 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:13:20.685736 kubelet[2696]: I0710 00:13:20.685702 2696 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:13:20.685970 kubelet[2696]: I0710 00:13:20.685941 2696 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:13:20.697222 kubelet[2696]: I0710 00:13:20.696874 2696 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:13:20.698482 kubelet[2696]: I0710 00:13:20.698458 2696 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:13:20.698482 kubelet[2696]: I0710 00:13:20.698480 2696 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:13:20.698588 kubelet[2696]: I0710 00:13:20.698498 2696 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:13:20.698588 kubelet[2696]: I0710 00:13:20.698507 2696 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:13:20.698588 kubelet[2696]: E0710 00:13:20.698564 2696 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:13:20.720193 kubelet[2696]: I0710 00:13:20.720148 2696 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:13:20.720193 kubelet[2696]: I0710 00:13:20.720167 2696 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:13:20.720193 kubelet[2696]: I0710 00:13:20.720188 2696 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:13:20.720425 kubelet[2696]: I0710 00:13:20.720312 2696 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:13:20.720425 kubelet[2696]: I0710 00:13:20.720321 2696 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:13:20.720425 kubelet[2696]: I0710 00:13:20.720337 2696 policy_none.go:49] "None policy: Start" Jul 10 00:13:20.720425 kubelet[2696]: I0710 00:13:20.720347 2696 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:13:20.720425 kubelet[2696]: I0710 00:13:20.720356 2696 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:13:20.720525 kubelet[2696]: I0710 00:13:20.720433 2696 state_mem.go:75] "Updated machine memory state" Jul 10 00:13:20.724591 kubelet[2696]: E0710 00:13:20.724557 2696 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:13:20.724772 kubelet[2696]: I0710 
00:13:20.724754 2696 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:13:20.724840 kubelet[2696]: I0710 00:13:20.724771 2696 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:13:20.725036 kubelet[2696]: I0710 00:13:20.725020 2696 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:13:20.726064 kubelet[2696]: E0710 00:13:20.725959 2696 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:13:20.799414 kubelet[2696]: I0710 00:13:20.799288 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:20.799414 kubelet[2696]: I0710 00:13:20.799351 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.799414 kubelet[2696]: I0710 00:13:20.799288 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:20.833125 kubelet[2696]: I0710 00:13:20.833094 2696 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:13:20.887212 kubelet[2696]: I0710 00:13:20.887156 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:20.887212 kubelet[2696]: I0710 00:13:20.887195 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.887212 kubelet[2696]: I0710 00:13:20.887214 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.887429 kubelet[2696]: I0710 00:13:20.887237 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:20.887429 kubelet[2696]: I0710 00:13:20.887254 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/604a504c60e55c8b681b0294a2ed3222-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"604a504c60e55c8b681b0294a2ed3222\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:20.887429 kubelet[2696]: I0710 00:13:20.887270 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.887429 kubelet[2696]: I0710 00:13:20.887290 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.887429 kubelet[2696]: I0710 00:13:20.887306 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:20.887545 kubelet[2696]: I0710 00:13:20.887322 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:13:21.017987 kubelet[2696]: E0710 00:13:21.017844 2696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:21.018151 kubelet[2696]: E0710 00:13:21.018121 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.018814 kubelet[2696]: I0710 00:13:21.018639 2696 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 00:13:21.018814 kubelet[2696]: I0710 00:13:21.018724 2696 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 00:13:21.155832 kubelet[2696]: E0710 00:13:21.155676 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.155953 kubelet[2696]: E0710 00:13:21.155898 2696 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.666475 kubelet[2696]: I0710 00:13:21.666429 2696 apiserver.go:52] "Watching apiserver" Jul 10 00:13:21.686698 kubelet[2696]: I0710 00:13:21.686669 2696 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:13:21.713119 kubelet[2696]: I0710 00:13:21.713086 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:21.713327 kubelet[2696]: E0710 00:13:21.713167 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.713327 kubelet[2696]: I0710 00:13:21.713255 2696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:21.945540 kubelet[2696]: E0710 00:13:21.945477 2696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:13:21.945848 kubelet[2696]: E0710 00:13:21.945747 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.947012 kubelet[2696]: E0710 00:13:21.946952 2696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:13:21.947755 kubelet[2696]: I0710 00:13:21.947043 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.947027103 podStartE2EDuration="1.947027103s" podCreationTimestamp="2025-07-10 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:21.945874621 +0000 UTC m=+1.392472552" watchObservedRunningTime="2025-07-10 00:13:21.947027103 +0000 UTC m=+1.393625014" Jul 10 00:13:21.947755 kubelet[2696]: E0710 00:13:21.947097 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:21.967118 kubelet[2696]: I0710 00:13:21.967041 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.967021152 podStartE2EDuration="3.967021152s" podCreationTimestamp="2025-07-10 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:21.956064726 +0000 UTC m=+1.402662637" watchObservedRunningTime="2025-07-10 00:13:21.967021152 +0000 UTC m=+1.413619063" Jul 10 00:13:21.979688 kubelet[2696]: I0710 00:13:21.979591 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.9795650569999999 podStartE2EDuration="1.979565057s" podCreationTimestamp="2025-07-10 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:21.970444235 +0000 UTC m=+1.417042146" watchObservedRunningTime="2025-07-10 00:13:21.979565057 +0000 UTC m=+1.426162969" Jul 10 00:13:21.984429 sudo[2738]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:13:21.985001 sudo[2738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:13:22.460550 sudo[2738]: pam_unix(sudo:session): session closed for user root Jul 10 00:13:22.714977 kubelet[2696]: E0710 00:13:22.714788 2696 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:22.715903 kubelet[2696]: E0710 00:13:22.715858 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:22.715903 kubelet[2696]: E0710 00:13:22.715858 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:23.878101 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 10 00:13:23.879591 sshd[1767]: Connection closed by 10.0.0.1 port 50916 Jul 10 00:13:23.880197 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:23.885125 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:13:23.885552 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:50916.service: Deactivated successfully. Jul 10 00:13:23.888123 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:13:23.888347 systemd[1]: session-7.scope: Consumed 5.403s CPU time, 258.1M memory peak. Jul 10 00:13:23.891208 systemd-logind[1509]: Removed session 7. Jul 10 00:13:25.389509 kubelet[2696]: I0710 00:13:25.389450 2696 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:13:25.390062 containerd[1573]: time="2025-07-10T00:13:25.389835253Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:13:25.390361 kubelet[2696]: I0710 00:13:25.390138 2696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:13:26.282696 systemd[1]: Created slice kubepods-besteffort-pod86fec064_6814_466c_ac9c_2fbe0db09050.slice - libcontainer container kubepods-besteffort-pod86fec064_6814_466c_ac9c_2fbe0db09050.slice. Jul 10 00:13:26.304465 systemd[1]: Created slice kubepods-burstable-pod4803bec2_5640_4c58_9ea4_6335971c236b.slice - libcontainer container kubepods-burstable-pod4803bec2_5640_4c58_9ea4_6335971c236b.slice. Jul 10 00:13:26.320021 kubelet[2696]: I0710 00:13:26.319967 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fec064-6814-466c-ac9c-2fbe0db09050-xtables-lock\") pod \"kube-proxy-9j5gn\" (UID: \"86fec064-6814-466c-ac9c-2fbe0db09050\") " pod="kube-system/kube-proxy-9j5gn" Jul 10 00:13:26.320380 kubelet[2696]: I0710 00:13:26.320258 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-bpf-maps\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320380 kubelet[2696]: I0710 00:13:26.320286 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-hostproc\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320380 kubelet[2696]: I0710 00:13:26.320336 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-etc-cni-netd\") pod \"cilium-qbdt9\" (UID: 
\"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320380 kubelet[2696]: I0710 00:13:26.320352 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-xtables-lock\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320583 kubelet[2696]: I0710 00:13:26.320538 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86fec064-6814-466c-ac9c-2fbe0db09050-kube-proxy\") pod \"kube-proxy-9j5gn\" (UID: \"86fec064-6814-466c-ac9c-2fbe0db09050\") " pod="kube-system/kube-proxy-9j5gn" Jul 10 00:13:26.320583 kubelet[2696]: I0710 00:13:26.320558 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cni-path\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320756 kubelet[2696]: I0710 00:13:26.320717 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4803bec2-5640-4c58-9ea4-6335971c236b-clustermesh-secrets\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.320866 kubelet[2696]: I0710 00:13:26.320741 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqx7\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-kube-api-access-7cqx7\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321113 kubelet[2696]: I0710 
00:13:26.320940 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-run\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321113 kubelet[2696]: I0710 00:13:26.320959 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fec064-6814-466c-ac9c-2fbe0db09050-lib-modules\") pod \"kube-proxy-9j5gn\" (UID: \"86fec064-6814-466c-ac9c-2fbe0db09050\") " pod="kube-system/kube-proxy-9j5gn" Jul 10 00:13:26.321113 kubelet[2696]: I0710 00:13:26.320974 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-cgroup\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321113 kubelet[2696]: I0710 00:13:26.320989 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-lib-modules\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321113 kubelet[2696]: I0710 00:13:26.321003 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-config-path\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321113 kubelet[2696]: I0710 00:13:26.321017 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-net\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321263 kubelet[2696]: I0710 00:13:26.321032 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-kernel\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321263 kubelet[2696]: I0710 00:13:26.321046 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-hubble-tls\") pod \"cilium-qbdt9\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") " pod="kube-system/cilium-qbdt9" Jul 10 00:13:26.321263 kubelet[2696]: I0710 00:13:26.321060 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gklq\" (UniqueName: \"kubernetes.io/projected/86fec064-6814-466c-ac9c-2fbe0db09050-kube-api-access-4gklq\") pod \"kube-proxy-9j5gn\" (UID: \"86fec064-6814-466c-ac9c-2fbe0db09050\") " pod="kube-system/kube-proxy-9j5gn" Jul 10 00:13:26.324439 systemd[1]: Created slice kubepods-besteffort-pod820c45bc_e304_414f_b6bb_2e9593ae5916.slice - libcontainer container kubepods-besteffort-pod820c45bc_e304_414f_b6bb_2e9593ae5916.slice. 
Jul 10 00:13:26.422864 kubelet[2696]: I0710 00:13:26.422779 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhv8\" (UniqueName: \"kubernetes.io/projected/820c45bc-e304-414f-b6bb-2e9593ae5916-kube-api-access-qlhv8\") pod \"cilium-operator-6c4d7847fc-2hnds\" (UID: \"820c45bc-e304-414f-b6bb-2e9593ae5916\") " pod="kube-system/cilium-operator-6c4d7847fc-2hnds" Jul 10 00:13:26.424018 kubelet[2696]: I0710 00:13:26.423022 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/820c45bc-e304-414f-b6bb-2e9593ae5916-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2hnds\" (UID: \"820c45bc-e304-414f-b6bb-2e9593ae5916\") " pod="kube-system/cilium-operator-6c4d7847fc-2hnds" Jul 10 00:13:26.599422 kubelet[2696]: E0710 00:13:26.599286 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.600438 containerd[1573]: time="2025-07-10T00:13:26.600397746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9j5gn,Uid:86fec064-6814-466c-ac9c-2fbe0db09050,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:26.616807 kubelet[2696]: E0710 00:13:26.616757 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.617234 containerd[1573]: time="2025-07-10T00:13:26.617205040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbdt9,Uid:4803bec2-5640-4c58-9ea4-6335971c236b,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:26.629355 kubelet[2696]: E0710 00:13:26.629318 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.629930 containerd[1573]: time="2025-07-10T00:13:26.629826237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2hnds,Uid:820c45bc-e304-414f-b6bb-2e9593ae5916,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:26.645427 containerd[1573]: time="2025-07-10T00:13:26.645369209Z" level=info msg="connecting to shim 20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89" address="unix:///run/containerd/s/aeb950b2f3441bf2d6103018785fcebcd8f3ae3dea90647c7332492160a7ac0f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:26.646125 containerd[1573]: time="2025-07-10T00:13:26.646090947Z" level=info msg="connecting to shim 88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:26.664352 containerd[1573]: time="2025-07-10T00:13:26.663283735Z" level=info msg="connecting to shim 9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983" address="unix:///run/containerd/s/65547bb221aacd616e9487521c84a5d205062f9c59fb444f211d2cb4ed63c3a7" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:26.701006 systemd[1]: Started cri-containerd-88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c.scope - libcontainer container 88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c. Jul 10 00:13:26.707272 systemd[1]: Started cri-containerd-20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89.scope - libcontainer container 20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89. Jul 10 00:13:26.709317 systemd[1]: Started cri-containerd-9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983.scope - libcontainer container 9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983. 
Jul 10 00:13:26.742320 containerd[1573]: time="2025-07-10T00:13:26.742264839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qbdt9,Uid:4803bec2-5640-4c58-9ea4-6335971c236b,Namespace:kube-system,Attempt:0,} returns sandbox id \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\"" Jul 10 00:13:26.743402 kubelet[2696]: E0710 00:13:26.743368 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.745246 containerd[1573]: time="2025-07-10T00:13:26.745206511Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:13:26.754505 containerd[1573]: time="2025-07-10T00:13:26.754466657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9j5gn,Uid:86fec064-6814-466c-ac9c-2fbe0db09050,Namespace:kube-system,Attempt:0,} returns sandbox id \"20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89\"" Jul 10 00:13:26.755548 kubelet[2696]: E0710 00:13:26.755522 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.762020 containerd[1573]: time="2025-07-10T00:13:26.761959081Z" level=info msg="CreateContainer within sandbox \"20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:13:26.777997 containerd[1573]: time="2025-07-10T00:13:26.777944155Z" level=info msg="Container 0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:26.778760 containerd[1573]: time="2025-07-10T00:13:26.778730184Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2hnds,Uid:820c45bc-e304-414f-b6bb-2e9593ae5916,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\"" Jul 10 00:13:26.779622 kubelet[2696]: E0710 00:13:26.779593 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:26.805480 containerd[1573]: time="2025-07-10T00:13:26.805344116Z" level=info msg="CreateContainer within sandbox \"20fd650cc01adb58d5b8e5903bd206e28ede29348956b84f1c1179f0fec55b89\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498\"" Jul 10 00:13:26.807555 containerd[1573]: time="2025-07-10T00:13:26.806354614Z" level=info msg="StartContainer for \"0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498\"" Jul 10 00:13:26.811391 containerd[1573]: time="2025-07-10T00:13:26.811353661Z" level=info msg="connecting to shim 0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498" address="unix:///run/containerd/s/aeb950b2f3441bf2d6103018785fcebcd8f3ae3dea90647c7332492160a7ac0f" protocol=ttrpc version=3 Jul 10 00:13:26.833055 systemd[1]: Started cri-containerd-0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498.scope - libcontainer container 0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498. 
Jul 10 00:13:26.880402 containerd[1573]: time="2025-07-10T00:13:26.880285444Z" level=info msg="StartContainer for \"0c797d65b66c9c1b75e7c8ddab9d2e35610af08e3e8083064f6b39bfff6df498\" returns successfully" Jul 10 00:13:27.167340 kubelet[2696]: E0710 00:13:27.167210 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:27.727048 kubelet[2696]: E0710 00:13:27.726930 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:27.727762 kubelet[2696]: E0710 00:13:27.727484 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:27.736775 kubelet[2696]: I0710 00:13:27.736630 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9j5gn" podStartSLOduration=2.736444589 podStartE2EDuration="2.736444589s" podCreationTimestamp="2025-07-10 00:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:27.73641353 +0000 UTC m=+7.183011441" watchObservedRunningTime="2025-07-10 00:13:27.736444589 +0000 UTC m=+7.183042490" Jul 10 00:13:29.342315 kubelet[2696]: E0710 00:13:29.341903 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:29.730831 kubelet[2696]: E0710 00:13:29.730708 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:29.886707 kubelet[2696]: E0710 00:13:29.886562 
2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:30.731529 kubelet[2696]: E0710 00:13:30.731472 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:30.732805 kubelet[2696]: E0710 00:13:30.731849 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:32.115296 update_engine[1511]: I20250710 00:13:32.115149 1511 update_attempter.cc:509] Updating boot flags... Jul 10 00:13:34.478747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790726993.mount: Deactivated successfully. Jul 10 00:13:39.630014 containerd[1573]: time="2025-07-10T00:13:39.629936618Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:39.630671 containerd[1573]: time="2025-07-10T00:13:39.630631151Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:13:39.631756 containerd[1573]: time="2025-07-10T00:13:39.631704168Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:39.633230 containerd[1573]: time="2025-07-10T00:13:39.633196467Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag 
\"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.887941204s" Jul 10 00:13:39.633310 containerd[1573]: time="2025-07-10T00:13:39.633231093Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:13:39.634158 containerd[1573]: time="2025-07-10T00:13:39.634131043Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:13:39.638813 containerd[1573]: time="2025-07-10T00:13:39.638759067Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:13:39.648275 containerd[1573]: time="2025-07-10T00:13:39.648218800Z" level=info msg="Container 6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:39.658079 containerd[1573]: time="2025-07-10T00:13:39.658027652Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\"" Jul 10 00:13:39.658732 containerd[1573]: time="2025-07-10T00:13:39.658695293Z" level=info msg="StartContainer for \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\"" Jul 10 00:13:39.659636 containerd[1573]: time="2025-07-10T00:13:39.659590605Z" level=info msg="connecting to shim 6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" protocol=ttrpc version=3 Jul 10 
00:13:39.682959 systemd[1]: Started cri-containerd-6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4.scope - libcontainer container 6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4. Jul 10 00:13:39.714441 containerd[1573]: time="2025-07-10T00:13:39.714401722Z" level=info msg="StartContainer for \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" returns successfully" Jul 10 00:13:39.726703 systemd[1]: cri-containerd-6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4.scope: Deactivated successfully. Jul 10 00:13:39.727853 containerd[1573]: time="2025-07-10T00:13:39.727818050Z" level=info msg="received exit event container_id:\"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" id:\"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" pid:3141 exited_at:{seconds:1752106419 nanos:727450346}" Jul 10 00:13:39.728004 containerd[1573]: time="2025-07-10T00:13:39.727852655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" id:\"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" pid:3141 exited_at:{seconds:1752106419 nanos:727450346}" Jul 10 00:13:39.750736 kubelet[2696]: E0710 00:13:39.750689 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:39.755265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4-rootfs.mount: Deactivated successfully. 
Jul 10 00:13:40.754420 kubelet[2696]: E0710 00:13:40.754359 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:40.958044 containerd[1573]: time="2025-07-10T00:13:40.957987645Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:13:41.016426 containerd[1573]: time="2025-07-10T00:13:41.016286729Z" level=info msg="Container 3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:41.025162 containerd[1573]: time="2025-07-10T00:13:41.025089005Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\"" Jul 10 00:13:41.025811 containerd[1573]: time="2025-07-10T00:13:41.025728391Z" level=info msg="StartContainer for \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\"" Jul 10 00:13:41.026930 containerd[1573]: time="2025-07-10T00:13:41.026874696Z" level=info msg="connecting to shim 3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" protocol=ttrpc version=3 Jul 10 00:13:41.051142 systemd[1]: Started cri-containerd-3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a.scope - libcontainer container 3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a. 
Jul 10 00:13:41.086050 containerd[1573]: time="2025-07-10T00:13:41.085913490Z" level=info msg="StartContainer for \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" returns successfully" Jul 10 00:13:41.100724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:13:41.101118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:13:41.102048 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:13:41.104076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:13:41.104991 containerd[1573]: time="2025-07-10T00:13:41.104921981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" id:\"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" pid:3188 exited_at:{seconds:1752106421 nanos:104194047}" Jul 10 00:13:41.105297 containerd[1573]: time="2025-07-10T00:13:41.105244570Z" level=info msg="received exit event container_id:\"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" id:\"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" pid:3188 exited_at:{seconds:1752106421 nanos:104194047}" Jul 10 00:13:41.106723 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:13:41.107483 systemd[1]: cri-containerd-3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a.scope: Deactivated successfully. Jul 10 00:13:41.130635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:13:41.759570 kubelet[2696]: E0710 00:13:41.759509 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:41.766067 containerd[1573]: time="2025-07-10T00:13:41.766015935Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:13:41.778143 containerd[1573]: time="2025-07-10T00:13:41.778086159Z" level=info msg="Container 62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:41.789065 containerd[1573]: time="2025-07-10T00:13:41.789005361Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\"" Jul 10 00:13:41.789926 containerd[1573]: time="2025-07-10T00:13:41.789893057Z" level=info msg="StartContainer for \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\"" Jul 10 00:13:41.791251 containerd[1573]: time="2025-07-10T00:13:41.791143417Z" level=info msg="connecting to shim 62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" protocol=ttrpc version=3 Jul 10 00:13:41.816599 systemd[1]: Started cri-containerd-62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14.scope - libcontainer container 62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14. Jul 10 00:13:41.861300 systemd[1]: cri-containerd-62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14.scope: Deactivated successfully. 
Jul 10 00:13:41.862841 containerd[1573]: time="2025-07-10T00:13:41.862771996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" id:\"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" pid:3248 exited_at:{seconds:1752106421 nanos:862358456}" Jul 10 00:13:41.897782 containerd[1573]: time="2025-07-10T00:13:41.897732279Z" level=info msg="received exit event container_id:\"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" id:\"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" pid:3248 exited_at:{seconds:1752106421 nanos:862358456}" Jul 10 00:13:41.910270 containerd[1573]: time="2025-07-10T00:13:41.910217117Z" level=info msg="StartContainer for \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" returns successfully" Jul 10 00:13:42.014749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a-rootfs.mount: Deactivated successfully. 
Jul 10 00:13:42.090788 containerd[1573]: time="2025-07-10T00:13:42.090719472Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:42.091489 containerd[1573]: time="2025-07-10T00:13:42.091464578Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:13:42.092557 containerd[1573]: time="2025-07-10T00:13:42.092532573Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:42.093626 containerd[1573]: time="2025-07-10T00:13:42.093574840Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.459415533s" Jul 10 00:13:42.093626 containerd[1573]: time="2025-07-10T00:13:42.093622470Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:13:42.098222 containerd[1573]: time="2025-07-10T00:13:42.098177244Z" level=info msg="CreateContainer within sandbox \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:13:42.106091 containerd[1573]: time="2025-07-10T00:13:42.106039920Z" level=info msg="Container 
10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:42.113530 containerd[1573]: time="2025-07-10T00:13:42.113495708Z" level=info msg="CreateContainer within sandbox \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\"" Jul 10 00:13:42.113925 containerd[1573]: time="2025-07-10T00:13:42.113904019Z" level=info msg="StartContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\"" Jul 10 00:13:42.114826 containerd[1573]: time="2025-07-10T00:13:42.114775583Z" level=info msg="connecting to shim 10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa" address="unix:///run/containerd/s/65547bb221aacd616e9487521c84a5d205062f9c59fb444f211d2cb4ed63c3a7" protocol=ttrpc version=3 Jul 10 00:13:42.134973 systemd[1]: Started cri-containerd-10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa.scope - libcontainer container 10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa. 
Jul 10 00:13:42.167408 containerd[1573]: time="2025-07-10T00:13:42.167346919Z" level=info msg="StartContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" returns successfully" Jul 10 00:13:42.762814 kubelet[2696]: E0710 00:13:42.762751 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:42.768053 kubelet[2696]: E0710 00:13:42.768004 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:42.901197 kubelet[2696]: I0710 00:13:42.901131 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2hnds" podStartSLOduration=1.587084736 podStartE2EDuration="16.901110714s" podCreationTimestamp="2025-07-10 00:13:26 +0000 UTC" firstStartedPulling="2025-07-10 00:13:26.780261476 +0000 UTC m=+6.226859387" lastFinishedPulling="2025-07-10 00:13:42.094287454 +0000 UTC m=+21.540885365" observedRunningTime="2025-07-10 00:13:42.90066361 +0000 UTC m=+22.347261521" watchObservedRunningTime="2025-07-10 00:13:42.901110714 +0000 UTC m=+22.347708625" Jul 10 00:13:43.048158 containerd[1573]: time="2025-07-10T00:13:43.048013999Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:13:43.184478 containerd[1573]: time="2025-07-10T00:13:43.183820311Z" level=info msg="Container 7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:43.200862 containerd[1573]: time="2025-07-10T00:13:43.200788285Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\"" Jul 10 00:13:43.201538 containerd[1573]: time="2025-07-10T00:13:43.201382727Z" level=info msg="StartContainer for \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\"" Jul 10 00:13:43.202453 containerd[1573]: time="2025-07-10T00:13:43.202423149Z" level=info msg="connecting to shim 7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" protocol=ttrpc version=3 Jul 10 00:13:43.226001 systemd[1]: Started cri-containerd-7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720.scope - libcontainer container 7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720. Jul 10 00:13:43.257357 systemd[1]: cri-containerd-7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720.scope: Deactivated successfully. 
Jul 10 00:13:43.257933 containerd[1573]: time="2025-07-10T00:13:43.257877254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" id:\"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" pid:3329 exited_at:{seconds:1752106423 nanos:257476950}" Jul 10 00:13:43.259637 containerd[1573]: time="2025-07-10T00:13:43.259597389Z" level=info msg="received exit event container_id:\"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" id:\"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" pid:3329 exited_at:{seconds:1752106423 nanos:257476950}" Jul 10 00:13:43.267339 containerd[1573]: time="2025-07-10T00:13:43.267312350Z" level=info msg="StartContainer for \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" returns successfully" Jul 10 00:13:43.280717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720-rootfs.mount: Deactivated successfully. 
Jul 10 00:13:43.772940 kubelet[2696]: E0710 00:13:43.772894 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:43.773624 kubelet[2696]: E0710 00:13:43.773590 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:44.024357 containerd[1573]: time="2025-07-10T00:13:44.024231926Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:13:44.042895 containerd[1573]: time="2025-07-10T00:13:44.042834441Z" level=info msg="Container dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:44.051902 containerd[1573]: time="2025-07-10T00:13:44.051840112Z" level=info msg="CreateContainer within sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\"" Jul 10 00:13:44.052486 containerd[1573]: time="2025-07-10T00:13:44.052437288Z" level=info msg="StartContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\"" Jul 10 00:13:44.053552 containerd[1573]: time="2025-07-10T00:13:44.053524318Z" level=info msg="connecting to shim dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089" address="unix:///run/containerd/s/2c6ecdbb288ef9e3415242cdd6504a89111bb58f38e411c1f3b0a2d5ce26a634" protocol=ttrpc version=3 Jul 10 00:13:44.082984 systemd[1]: Started cri-containerd-dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089.scope - libcontainer container dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089. 
Jul 10 00:13:44.129100 containerd[1573]: time="2025-07-10T00:13:44.129021777Z" level=info msg="StartContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" returns successfully" Jul 10 00:13:44.237176 containerd[1573]: time="2025-07-10T00:13:44.236690093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" id:\"6546bfb5ca8c9aa09900b94f86c910805e9272c9ad02461df9b17398947469c7\" pid:3399 exited_at:{seconds:1752106424 nanos:236122193}" Jul 10 00:13:44.278971 kubelet[2696]: I0710 00:13:44.278865 2696 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:13:44.356011 systemd[1]: Created slice kubepods-burstable-poddd175f60_cc1f_47b2_873a_dbee0fdb0439.slice - libcontainer container kubepods-burstable-poddd175f60_cc1f_47b2_873a_dbee0fdb0439.slice. Jul 10 00:13:44.367023 systemd[1]: Created slice kubepods-burstable-podcec7c3ea_7390_411b_bb93_fecd69102374.slice - libcontainer container kubepods-burstable-podcec7c3ea_7390_411b_bb93_fecd69102374.slice. 
Jul 10 00:13:44.438002 kubelet[2696]: I0710 00:13:44.437924 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmns\" (UniqueName: \"kubernetes.io/projected/cec7c3ea-7390-411b-bb93-fecd69102374-kube-api-access-pdmns\") pod \"coredns-674b8bbfcf-5rbmd\" (UID: \"cec7c3ea-7390-411b-bb93-fecd69102374\") " pod="kube-system/coredns-674b8bbfcf-5rbmd" Jul 10 00:13:44.438002 kubelet[2696]: I0710 00:13:44.438007 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd175f60-cc1f-47b2-873a-dbee0fdb0439-config-volume\") pod \"coredns-674b8bbfcf-cnvcz\" (UID: \"dd175f60-cc1f-47b2-873a-dbee0fdb0439\") " pod="kube-system/coredns-674b8bbfcf-cnvcz" Jul 10 00:13:44.438193 kubelet[2696]: I0710 00:13:44.438034 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbpt6\" (UniqueName: \"kubernetes.io/projected/dd175f60-cc1f-47b2-873a-dbee0fdb0439-kube-api-access-xbpt6\") pod \"coredns-674b8bbfcf-cnvcz\" (UID: \"dd175f60-cc1f-47b2-873a-dbee0fdb0439\") " pod="kube-system/coredns-674b8bbfcf-cnvcz" Jul 10 00:13:44.438193 kubelet[2696]: I0710 00:13:44.438057 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cec7c3ea-7390-411b-bb93-fecd69102374-config-volume\") pod \"coredns-674b8bbfcf-5rbmd\" (UID: \"cec7c3ea-7390-411b-bb93-fecd69102374\") " pod="kube-system/coredns-674b8bbfcf-5rbmd" Jul 10 00:13:44.661242 kubelet[2696]: E0710 00:13:44.661127 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:44.661817 containerd[1573]: time="2025-07-10T00:13:44.661762392Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-cnvcz,Uid:dd175f60-cc1f-47b2-873a-dbee0fdb0439,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:44.670189 kubelet[2696]: E0710 00:13:44.670152 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:44.670507 containerd[1573]: time="2025-07-10T00:13:44.670476321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rbmd,Uid:cec7c3ea-7390-411b-bb93-fecd69102374,Namespace:kube-system,Attempt:0,}" Jul 10 00:13:44.778130 kubelet[2696]: E0710 00:13:44.778100 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:44.845820 kubelet[2696]: I0710 00:13:44.844273 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qbdt9" podStartSLOduration=6.954499249 podStartE2EDuration="19.844251693s" podCreationTimestamp="2025-07-10 00:13:25 +0000 UTC" firstStartedPulling="2025-07-10 00:13:26.744273872 +0000 UTC m=+6.190871783" lastFinishedPulling="2025-07-10 00:13:39.634026316 +0000 UTC m=+19.080624227" observedRunningTime="2025-07-10 00:13:44.843210279 +0000 UTC m=+24.289808220" watchObservedRunningTime="2025-07-10 00:13:44.844251693 +0000 UTC m=+24.290849604" Jul 10 00:13:45.780259 kubelet[2696]: E0710 00:13:45.780208 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:46.764304 systemd-networkd[1462]: cilium_host: Link UP Jul 10 00:13:46.764965 systemd-networkd[1462]: cilium_net: Link UP Jul 10 00:13:46.765175 systemd-networkd[1462]: cilium_net: Gained carrier Jul 10 00:13:46.765357 systemd-networkd[1462]: cilium_host: Gained carrier Jul 10 00:13:46.784557 kubelet[2696]: 
E0710 00:13:46.784447 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:46.871211 systemd-networkd[1462]: cilium_vxlan: Link UP Jul 10 00:13:46.871223 systemd-networkd[1462]: cilium_vxlan: Gained carrier Jul 10 00:13:47.097829 kernel: NET: Registered PF_ALG protocol family Jul 10 00:13:47.188006 systemd-networkd[1462]: cilium_net: Gained IPv6LL Jul 10 00:13:47.768394 systemd-networkd[1462]: lxc_health: Link UP Jul 10 00:13:47.780898 systemd-networkd[1462]: lxc_health: Gained carrier Jul 10 00:13:47.805126 systemd-networkd[1462]: cilium_host: Gained IPv6LL Jul 10 00:13:47.858084 systemd-networkd[1462]: lxc7940ea7611fc: Link UP Jul 10 00:13:47.866827 kernel: eth0: renamed from tmp48120 Jul 10 00:13:47.871113 systemd-networkd[1462]: lxc7940ea7611fc: Gained carrier Jul 10 00:13:47.883225 systemd-networkd[1462]: lxc4a90b91c89b1: Link UP Jul 10 00:13:47.884957 kernel: eth0: renamed from tmp2cd01 Jul 10 00:13:47.885599 systemd-networkd[1462]: lxc4a90b91c89b1: Gained carrier Jul 10 00:13:48.622148 kubelet[2696]: E0710 00:13:48.622094 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:48.892064 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL Jul 10 00:13:49.148083 systemd-networkd[1462]: lxc4a90b91c89b1: Gained IPv6LL Jul 10 00:13:49.404064 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jul 10 00:13:49.532740 systemd-networkd[1462]: lxc7940ea7611fc: Gained IPv6LL Jul 10 00:13:51.431819 containerd[1573]: time="2025-07-10T00:13:51.431319264Z" level=info msg="connecting to shim 481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd" address="unix:///run/containerd/s/e65b3881b839a38b21a3b7ee085b21bba1557b84d3175563d87aff2c3f3228d5" namespace=k8s.io protocol=ttrpc version=3 Jul 10 
00:13:51.433229 containerd[1573]: time="2025-07-10T00:13:51.433198500Z" level=info msg="connecting to shim 2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8" address="unix:///run/containerd/s/cfb77c776199914d5456eb3086ae2dca1cc363a5544931dc3f63609489339c52" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:13:51.460923 systemd[1]: Started cri-containerd-481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd.scope - libcontainer container 481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd. Jul 10 00:13:51.465103 systemd[1]: Started cri-containerd-2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8.scope - libcontainer container 2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8. Jul 10 00:13:51.477636 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:13:51.480933 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:13:51.514244 containerd[1573]: time="2025-07-10T00:13:51.514193927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5rbmd,Uid:cec7c3ea-7390-411b-bb93-fecd69102374,Namespace:kube-system,Attempt:0,} returns sandbox id \"481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd\"" Jul 10 00:13:51.518823 kubelet[2696]: E0710 00:13:51.518685 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:51.522512 containerd[1573]: time="2025-07-10T00:13:51.522473959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cnvcz,Uid:dd175f60-cc1f-47b2-873a-dbee0fdb0439,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8\"" Jul 10 00:13:51.523137 kubelet[2696]: E0710 00:13:51.523091 2696 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:51.525566 containerd[1573]: time="2025-07-10T00:13:51.525493892Z" level=info msg="CreateContainer within sandbox \"481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:13:51.528591 containerd[1573]: time="2025-07-10T00:13:51.528538191Z" level=info msg="CreateContainer within sandbox \"2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:13:51.538580 containerd[1573]: time="2025-07-10T00:13:51.538370493Z" level=info msg="Container 70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:51.542001 containerd[1573]: time="2025-07-10T00:13:51.541954849Z" level=info msg="Container 83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:13:51.549104 containerd[1573]: time="2025-07-10T00:13:51.549064808Z" level=info msg="CreateContainer within sandbox \"481204b70c86e51b078589be56d46ddb4ebe9730fdc4f6569204c5454298d0cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c\"" Jul 10 00:13:51.549738 containerd[1573]: time="2025-07-10T00:13:51.549701396Z" level=info msg="StartContainer for \"70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c\"" Jul 10 00:13:51.550909 containerd[1573]: time="2025-07-10T00:13:51.550876367Z" level=info msg="connecting to shim 70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c" address="unix:///run/containerd/s/e65b3881b839a38b21a3b7ee085b21bba1557b84d3175563d87aff2c3f3228d5" protocol=ttrpc version=3 Jul 10 00:13:51.551002 containerd[1573]: 
time="2025-07-10T00:13:51.550942792Z" level=info msg="CreateContainer within sandbox \"2cd0101b1aebd655df722f06ecf54f954d47ace3ef586b22705a9f870a111fb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb\"" Jul 10 00:13:51.551472 containerd[1573]: time="2025-07-10T00:13:51.551420581Z" level=info msg="StartContainer for \"83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb\"" Jul 10 00:13:51.552998 containerd[1573]: time="2025-07-10T00:13:51.552974826Z" level=info msg="connecting to shim 83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb" address="unix:///run/containerd/s/cfb77c776199914d5456eb3086ae2dca1cc363a5544931dc3f63609489339c52" protocol=ttrpc version=3 Jul 10 00:13:51.586997 systemd[1]: Started cri-containerd-70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c.scope - libcontainer container 70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c. Jul 10 00:13:51.588613 systemd[1]: Started cri-containerd-83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb.scope - libcontainer container 83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb. 
Jul 10 00:13:51.634297 containerd[1573]: time="2025-07-10T00:13:51.634251102Z" level=info msg="StartContainer for \"70770d0fed381881f47303912338e19d11b3ea624dd25ecd03a37df955d58a2c\" returns successfully" Jul 10 00:13:51.638599 containerd[1573]: time="2025-07-10T00:13:51.638545292Z" level=info msg="StartContainer for \"83d3d6133696c822216cc973ae866aefa6553e77df86ebcd7c2d62940a1e43eb\" returns successfully" Jul 10 00:13:51.795840 kubelet[2696]: E0710 00:13:51.795757 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:51.799126 kubelet[2696]: E0710 00:13:51.799080 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:51.805931 kubelet[2696]: I0710 00:13:51.805850 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cnvcz" podStartSLOduration=25.805832474 podStartE2EDuration="25.805832474s" podCreationTimestamp="2025-07-10 00:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:51.805279483 +0000 UTC m=+31.251877394" watchObservedRunningTime="2025-07-10 00:13:51.805832474 +0000 UTC m=+31.252430385" Jul 10 00:13:51.821837 kubelet[2696]: I0710 00:13:51.821716 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5rbmd" podStartSLOduration=25.821690425 podStartE2EDuration="25.821690425s" podCreationTimestamp="2025-07-10 00:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:13:51.820856967 +0000 UTC m=+31.267454878" watchObservedRunningTime="2025-07-10 00:13:51.821690425 +0000 UTC 
m=+31.268288336" Jul 10 00:13:52.422021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505927215.mount: Deactivated successfully. Jul 10 00:13:52.839604 kubelet[2696]: E0710 00:13:52.838783 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:52.839604 kubelet[2696]: E0710 00:13:52.838970 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:53.262897 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:46774.service - OpenSSH per-connection server daemon (10.0.0.1:46774). Jul 10 00:13:53.328122 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 46774 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:13:53.329768 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:53.334589 systemd-logind[1509]: New session 8 of user core. Jul 10 00:13:53.347927 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:13:53.523657 sshd[4049]: Connection closed by 10.0.0.1 port 46774 Jul 10 00:13:53.523940 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:53.527865 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:46774.service: Deactivated successfully. Jul 10 00:13:53.529846 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:13:53.530634 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:13:53.531957 systemd-logind[1509]: Removed session 8. 
Jul 10 00:13:53.841135 kubelet[2696]: E0710 00:13:53.840993 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:53.841135 kubelet[2696]: E0710 00:13:53.841035 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:55.738827 kubelet[2696]: I0710 00:13:55.738644 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:13:55.739291 kubelet[2696]: E0710 00:13:55.739172 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:55.845112 kubelet[2696]: E0710 00:13:55.844993 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:13:58.535607 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:46788.service - OpenSSH per-connection server daemon (10.0.0.1:46788). Jul 10 00:13:58.579648 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 46788 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:13:58.581252 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:58.586118 systemd-logind[1509]: New session 9 of user core. Jul 10 00:13:58.591938 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:13:58.709755 sshd[4070]: Connection closed by 10.0.0.1 port 46788 Jul 10 00:13:58.710174 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:58.715058 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:46788.service: Deactivated successfully. 
Jul 10 00:13:58.717337 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:13:58.718088 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:13:58.719487 systemd-logind[1509]: Removed session 9. Jul 10 00:14:03.729458 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:53116.service - OpenSSH per-connection server daemon (10.0.0.1:53116). Jul 10 00:14:03.775811 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 53116 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:03.777525 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:03.783120 systemd-logind[1509]: New session 10 of user core. Jul 10 00:14:03.796986 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:14:03.926730 sshd[4086]: Connection closed by 10.0.0.1 port 53116 Jul 10 00:14:03.927067 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:03.932381 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:53116.service: Deactivated successfully. Jul 10 00:14:03.934661 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:14:03.935625 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:14:03.937402 systemd-logind[1509]: Removed session 10. Jul 10 00:14:08.944410 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:53128.service - OpenSSH per-connection server daemon (10.0.0.1:53128). Jul 10 00:14:08.998312 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 53128 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:09.000050 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:09.004468 systemd-logind[1509]: New session 11 of user core. Jul 10 00:14:09.018943 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 10 00:14:09.139381 sshd[4103]: Connection closed by 10.0.0.1 port 53128 Jul 10 00:14:09.139724 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:09.144584 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:53128.service: Deactivated successfully. Jul 10 00:14:09.146637 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:14:09.147467 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:14:09.148752 systemd-logind[1509]: Removed session 11. Jul 10 00:14:14.160967 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:37712.service - OpenSSH per-connection server daemon (10.0.0.1:37712). Jul 10 00:14:14.204373 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 37712 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:14.206322 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:14.211414 systemd-logind[1509]: New session 12 of user core. Jul 10 00:14:14.219926 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:14:14.346208 sshd[4120]: Connection closed by 10.0.0.1 port 37712 Jul 10 00:14:14.346652 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:14.360995 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:37712.service: Deactivated successfully. Jul 10 00:14:14.363037 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:14:14.363878 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:14:14.368156 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:37716.service - OpenSSH per-connection server daemon (10.0.0.1:37716). Jul 10 00:14:14.369145 systemd-logind[1509]: Removed session 12. 
Jul 10 00:14:14.420230 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:14.421748 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:14.426478 systemd-logind[1509]: New session 13 of user core. Jul 10 00:14:14.435938 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:14:14.583760 sshd[4137]: Connection closed by 10.0.0.1 port 37716 Jul 10 00:14:14.584216 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:14.594674 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:37716.service: Deactivated successfully. Jul 10 00:14:14.597844 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:14:14.599257 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:14:14.606429 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:37726.service - OpenSSH per-connection server daemon (10.0.0.1:37726). Jul 10 00:14:14.607491 systemd-logind[1509]: Removed session 13. Jul 10 00:14:14.659303 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 37726 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:14.660880 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:14.666380 systemd-logind[1509]: New session 14 of user core. Jul 10 00:14:14.681109 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:14:14.943720 sshd[4151]: Connection closed by 10.0.0.1 port 37726 Jul 10 00:14:14.944136 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:14.949115 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:37726.service: Deactivated successfully. Jul 10 00:14:14.951328 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:14:14.952174 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit. 
Jul 10 00:14:14.953741 systemd-logind[1509]: Removed session 14.
Jul 10 00:14:19.962078 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Jul 10 00:14:20.014123 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:20.015710 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:20.020692 systemd-logind[1509]: New session 15 of user core.
Jul 10 00:14:20.027944 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 10 00:14:20.142043 sshd[4166]: Connection closed by 10.0.0.1 port 45414
Jul 10 00:14:20.142395 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:20.146786 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:45414.service: Deactivated successfully.
Jul 10 00:14:20.149151 systemd[1]: session-15.scope: Deactivated successfully.
Jul 10 00:14:20.149965 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Jul 10 00:14:20.151244 systemd-logind[1509]: Removed session 15.
Jul 10 00:14:25.155069 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:45416.service - OpenSSH per-connection server daemon (10.0.0.1:45416).
Jul 10 00:14:25.208327 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 45416 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:25.209811 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:25.214419 systemd-logind[1509]: New session 16 of user core.
Jul 10 00:14:25.222970 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 10 00:14:25.339624 sshd[4186]: Connection closed by 10.0.0.1 port 45416
Jul 10 00:14:25.340424 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:25.361188 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:45416.service: Deactivated successfully.
Jul 10 00:14:25.364013 systemd[1]: session-16.scope: Deactivated successfully.
Jul 10 00:14:25.365006 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Jul 10 00:14:25.370302 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:45418.service - OpenSSH per-connection server daemon (10.0.0.1:45418).
Jul 10 00:14:25.370903 systemd-logind[1509]: Removed session 16.
Jul 10 00:14:25.433713 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 45418 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:25.435781 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:25.441914 systemd-logind[1509]: New session 17 of user core.
Jul 10 00:14:25.452949 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 10 00:14:25.768485 sshd[4201]: Connection closed by 10.0.0.1 port 45418
Jul 10 00:14:25.768670 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:25.782259 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:45418.service: Deactivated successfully.
Jul 10 00:14:25.784774 systemd[1]: session-17.scope: Deactivated successfully.
Jul 10 00:14:25.785847 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Jul 10 00:14:25.789369 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426).
Jul 10 00:14:25.790242 systemd-logind[1509]: Removed session 17.
Jul 10 00:14:25.847568 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:25.849261 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:25.854638 systemd-logind[1509]: New session 18 of user core.
Jul 10 00:14:25.863996 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:14:26.704925 sshd[4215]: Connection closed by 10.0.0.1 port 45426
Jul 10 00:14:26.708977 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:26.718181 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:45426.service: Deactivated successfully.
Jul 10 00:14:26.720595 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:14:26.723135 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:14:26.727045 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434).
Jul 10 00:14:26.728275 systemd-logind[1509]: Removed session 18.
Jul 10 00:14:26.771755 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:26.773878 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:26.779381 systemd-logind[1509]: New session 19 of user core.
Jul 10 00:14:26.789003 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:14:27.037252 sshd[4236]: Connection closed by 10.0.0.1 port 45434
Jul 10 00:14:27.038038 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:27.052480 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:45434.service: Deactivated successfully.
Jul 10 00:14:27.055706 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:14:27.058179 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:14:27.060585 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:45444.service - OpenSSH per-connection server daemon (10.0.0.1:45444).
Jul 10 00:14:27.061858 systemd-logind[1509]: Removed session 19.
Jul 10 00:14:27.114359 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 45444 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:27.117588 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:27.123888 systemd-logind[1509]: New session 20 of user core.
Jul 10 00:14:27.130994 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:14:27.247813 sshd[4252]: Connection closed by 10.0.0.1 port 45444
Jul 10 00:14:27.248192 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:27.252776 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:45444.service: Deactivated successfully.
Jul 10 00:14:27.255301 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:14:27.256381 systemd-logind[1509]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:14:27.258125 systemd-logind[1509]: Removed session 20.
Jul 10 00:14:32.266704 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:34792.service - OpenSSH per-connection server daemon (10.0.0.1:34792).
Jul 10 00:14:32.323010 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 34792 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:32.325072 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:32.330560 systemd-logind[1509]: New session 21 of user core.
Jul 10 00:14:32.341035 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 10 00:14:32.465373 sshd[4267]: Connection closed by 10.0.0.1 port 34792
Jul 10 00:14:32.465786 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:32.471009 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:34792.service: Deactivated successfully.
Jul 10 00:14:32.473052 systemd[1]: session-21.scope: Deactivated successfully.
Jul 10 00:14:32.473973 systemd-logind[1509]: Session 21 logged out. Waiting for processes to exit.
Jul 10 00:14:32.475596 systemd-logind[1509]: Removed session 21.
Jul 10 00:14:37.485093 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794).
Jul 10 00:14:37.543264 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:37.545228 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:37.551520 systemd-logind[1509]: New session 22 of user core.
Jul 10 00:14:37.562108 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 10 00:14:37.690503 sshd[4285]: Connection closed by 10.0.0.1 port 34794
Jul 10 00:14:37.690956 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:37.695953 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:34794.service: Deactivated successfully.
Jul 10 00:14:37.698765 systemd[1]: session-22.scope: Deactivated successfully.
Jul 10 00:14:37.700003 systemd-logind[1509]: Session 22 logged out. Waiting for processes to exit.
Jul 10 00:14:37.701874 systemd-logind[1509]: Removed session 22.
Jul 10 00:14:41.700542 kubelet[2696]: E0710 00:14:41.700483 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:42.712175 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:36620.service - OpenSSH per-connection server daemon (10.0.0.1:36620).
Jul 10 00:14:42.760984 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 36620 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:42.762752 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:42.767670 systemd-logind[1509]: New session 23 of user core.
Jul 10 00:14:42.777909 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 10 00:14:42.891885 sshd[4301]: Connection closed by 10.0.0.1 port 36620
Jul 10 00:14:42.892289 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Jul 10 00:14:42.902325 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:36620.service: Deactivated successfully.
Jul 10 00:14:42.904569 systemd[1]: session-23.scope: Deactivated successfully.
Jul 10 00:14:42.905528 systemd-logind[1509]: Session 23 logged out. Waiting for processes to exit.
Jul 10 00:14:42.909721 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:36622.service - OpenSSH per-connection server daemon (10.0.0.1:36622).
Jul 10 00:14:42.910647 systemd-logind[1509]: Removed session 23.
Jul 10 00:14:42.971825 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 36622 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4
Jul 10 00:14:42.973717 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:14:42.979284 systemd-logind[1509]: New session 24 of user core.
Jul 10 00:14:42.996070 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 00:14:44.359239 containerd[1573]: time="2025-07-10T00:14:44.359194749Z" level=info msg="StopContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" with timeout 30 (s)"
Jul 10 00:14:44.372277 containerd[1573]: time="2025-07-10T00:14:44.372236004Z" level=info msg="Stop container \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" with signal terminated"
Jul 10 00:14:44.387268 systemd[1]: cri-containerd-10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa.scope: Deactivated successfully.
Jul 10 00:14:44.390484 containerd[1573]: time="2025-07-10T00:14:44.390434475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" id:\"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" pid:3295 exited_at:{seconds:1752106484 nanos:389015978}"
Jul 10 00:14:44.390655 containerd[1573]: time="2025-07-10T00:14:44.390627323Z" level=info msg="received exit event container_id:\"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" id:\"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" pid:3295 exited_at:{seconds:1752106484 nanos:389015978}"
Jul 10 00:14:44.396820 containerd[1573]: time="2025-07-10T00:14:44.396723581Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 00:14:44.399067 containerd[1573]: time="2025-07-10T00:14:44.399035093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" id:\"0960ed347e4bd0fcc52685a141450794d7d27d06bfa09507da8fdcccefe8414a\" pid:4339 exited_at:{seconds:1752106484 nanos:398705675}"
Jul 10 00:14:44.402554 containerd[1573]: time="2025-07-10T00:14:44.402524505Z" level=info msg="StopContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" with timeout 2 (s)"
Jul 10 00:14:44.402918 containerd[1573]: time="2025-07-10T00:14:44.402884733Z" level=info msg="Stop container \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" with signal terminated"
Jul 10 00:14:44.411480 systemd-networkd[1462]: lxc_health: Link DOWN
Jul 10 00:14:44.412006 systemd-networkd[1462]: lxc_health: Lost carrier
Jul 10 00:14:44.418425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa-rootfs.mount: Deactivated successfully.
Jul 10 00:14:44.436493 systemd[1]: cri-containerd-dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089.scope: Deactivated successfully.
Jul 10 00:14:44.436899 systemd[1]: cri-containerd-dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089.scope: Consumed 6.815s CPU time, 126.9M memory peak, 228K read from disk, 13.3M written to disk.
Jul 10 00:14:44.437581 containerd[1573]: time="2025-07-10T00:14:44.437542303Z" level=info msg="received exit event container_id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" pid:3365 exited_at:{seconds:1752106484 nanos:437329487}"
Jul 10 00:14:44.437775 containerd[1573]: time="2025-07-10T00:14:44.437728027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" id:\"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" pid:3365 exited_at:{seconds:1752106484 nanos:437329487}"
Jul 10 00:14:44.446895 containerd[1573]: time="2025-07-10T00:14:44.446841034Z" level=info msg="StopContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" returns successfully"
Jul 10 00:14:44.448085 containerd[1573]: time="2025-07-10T00:14:44.447765618Z" level=info msg="StopPodSandbox for \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\""
Jul 10 00:14:44.448283 containerd[1573]: time="2025-07-10T00:14:44.448263058Z" level=info msg="Container to stop \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.457258 systemd[1]: cri-containerd-9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983.scope: Deactivated successfully.
Jul 10 00:14:44.459591 containerd[1573]: time="2025-07-10T00:14:44.459380301Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" id:\"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" pid:2891 exit_status:137 exited_at:{seconds:1752106484 nanos:458778342}"
Jul 10 00:14:44.464550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089-rootfs.mount: Deactivated successfully.
Jul 10 00:14:44.479415 containerd[1573]: time="2025-07-10T00:14:44.479362367Z" level=info msg="StopContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" returns successfully"
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480062042Z" level=info msg="StopPodSandbox for \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\""
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480132086Z" level=info msg="Container to stop \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480144790Z" level=info msg="Container to stop \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480156071Z" level=info msg="Container to stop \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480165129Z" level=info msg="Container to stop \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.480304 containerd[1573]: time="2025-07-10T00:14:44.480174547Z" level=info msg="Container to stop \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 10 00:14:44.487202 systemd[1]: cri-containerd-88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c.scope: Deactivated successfully.
Jul 10 00:14:44.494427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983-rootfs.mount: Deactivated successfully.
Jul 10 00:14:44.498458 containerd[1573]: time="2025-07-10T00:14:44.498380812Z" level=info msg="shim disconnected" id=9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983 namespace=k8s.io
Jul 10 00:14:44.498458 containerd[1573]: time="2025-07-10T00:14:44.498412302Z" level=warning msg="cleaning up after shim disconnected" id=9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983 namespace=k8s.io
Jul 10 00:14:44.514927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c-rootfs.mount: Deactivated successfully.
Jul 10 00:14:44.521947 containerd[1573]: time="2025-07-10T00:14:44.498419727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:14:44.522178 containerd[1573]: time="2025-07-10T00:14:44.518689611Z" level=info msg="shim disconnected" id=88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c namespace=k8s.io
Jul 10 00:14:44.522178 containerd[1573]: time="2025-07-10T00:14:44.522047933Z" level=warning msg="cleaning up after shim disconnected" id=88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c namespace=k8s.io
Jul 10 00:14:44.522178 containerd[1573]: time="2025-07-10T00:14:44.522060536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:14:44.545832 containerd[1573]: time="2025-07-10T00:14:44.545598470Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" id:\"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" pid:2881 exit_status:137 exited_at:{seconds:1752106484 nanos:489237588}"
Jul 10 00:14:44.548188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983-shm.mount: Deactivated successfully.
Jul 10 00:14:44.548490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c-shm.mount: Deactivated successfully.
Jul 10 00:14:44.555452 containerd[1573]: time="2025-07-10T00:14:44.555377507Z" level=info msg="received exit event sandbox_id:\"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" exit_status:137 exited_at:{seconds:1752106484 nanos:489237588}"
Jul 10 00:14:44.555516 containerd[1573]: time="2025-07-10T00:14:44.555450638Z" level=info msg="received exit event sandbox_id:\"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" exit_status:137 exited_at:{seconds:1752106484 nanos:458778342}"
Jul 10 00:14:44.557254 containerd[1573]: time="2025-07-10T00:14:44.557205477Z" level=info msg="TearDown network for sandbox \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" successfully"
Jul 10 00:14:44.557254 containerd[1573]: time="2025-07-10T00:14:44.557234222Z" level=info msg="StopPodSandbox for \"88641bfb94ef2ac6f2e6cf2da855403c7c5df4d5a258a2fa06d79d878d89d79c\" returns successfully"
Jul 10 00:14:44.558595 containerd[1573]: time="2025-07-10T00:14:44.558553540Z" level=info msg="TearDown network for sandbox \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" successfully"
Jul 10 00:14:44.558661 containerd[1573]: time="2025-07-10T00:14:44.558595050Z" level=info msg="StopPodSandbox for \"9aba9394b7b002ff9ac5389775b5ff294be0d2f7f94418df72bd02a1c7e1a983\" returns successfully"
Jul 10 00:14:44.617006 kubelet[2696]: I0710 00:14:44.616172 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-kernel\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617006 kubelet[2696]: I0710 00:14:44.616211 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-etc-cni-netd\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617006 kubelet[2696]: I0710 00:14:44.616237 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cni-path\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617006 kubelet[2696]: I0710 00:14:44.616258 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-cgroup\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617006 kubelet[2696]: I0710 00:14:44.616275 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.617559 kubelet[2696]: I0710 00:14:44.616275 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.617559 kubelet[2696]: I0710 00:14:44.616284 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/820c45bc-e304-414f-b6bb-2e9593ae5916-cilium-config-path\") pod \"820c45bc-e304-414f-b6bb-2e9593ae5916\" (UID: \"820c45bc-e304-414f-b6bb-2e9593ae5916\") "
Jul 10 00:14:44.617559 kubelet[2696]: I0710 00:14:44.616315 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cni-path" (OuterVolumeSpecName: "cni-path") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.617559 kubelet[2696]: I0710 00:14:44.616334 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.617559 kubelet[2696]: I0710 00:14:44.616359 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-xtables-lock\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616401 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cqx7\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-kube-api-access-7cqx7\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616427 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-lib-modules\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616446 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-net\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616466 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-bpf-maps\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616519 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-hostproc\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.617681 kubelet[2696]: I0710 00:14:44.616539 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-run\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616564 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-config-path\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616592 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4803bec2-5640-4c58-9ea4-6335971c236b-clustermesh-secrets\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616613 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-hubble-tls\") pod \"4803bec2-5640-4c58-9ea4-6335971c236b\" (UID: \"4803bec2-5640-4c58-9ea4-6335971c236b\") "
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616636 2696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlhv8\" (UniqueName: \"kubernetes.io/projected/820c45bc-e304-414f-b6bb-2e9593ae5916-kube-api-access-qlhv8\") pod \"820c45bc-e304-414f-b6bb-2e9593ae5916\" (UID: \"820c45bc-e304-414f-b6bb-2e9593ae5916\") "
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616675 2696 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616687 2696 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 10 00:14:44.618437 kubelet[2696]: I0710 00:14:44.616698 2696 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 10 00:14:44.618644 kubelet[2696]: I0710 00:14:44.616709 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 10 00:14:44.618644 kubelet[2696]: I0710 00:14:44.617096 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.618644 kubelet[2696]: I0710 00:14:44.617128 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.620810 kubelet[2696]: I0710 00:14:44.620400 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/820c45bc-e304-414f-b6bb-2e9593ae5916-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "820c45bc-e304-414f-b6bb-2e9593ae5916" (UID: "820c45bc-e304-414f-b6bb-2e9593ae5916"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 10 00:14:44.620810 kubelet[2696]: I0710 00:14:44.620735 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 10 00:14:44.620810 kubelet[2696]: I0710 00:14:44.620771 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-hostproc" (OuterVolumeSpecName: "hostproc") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.620905 kubelet[2696]: I0710 00:14:44.620787 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.620905 kubelet[2696]: I0710 00:14:44.620836 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.620905 kubelet[2696]: I0710 00:14:44.620850 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 10 00:14:44.621559 kubelet[2696]: I0710 00:14:44.621536 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820c45bc-e304-414f-b6bb-2e9593ae5916-kube-api-access-qlhv8" (OuterVolumeSpecName: "kube-api-access-qlhv8") pod "820c45bc-e304-414f-b6bb-2e9593ae5916" (UID: "820c45bc-e304-414f-b6bb-2e9593ae5916"). InnerVolumeSpecName "kube-api-access-qlhv8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:14:44.622412 kubelet[2696]: I0710 00:14:44.622369 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4803bec2-5640-4c58-9ea4-6335971c236b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 10 00:14:44.622977 kubelet[2696]: I0710 00:14:44.622943 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-kube-api-access-7cqx7" (OuterVolumeSpecName: "kube-api-access-7cqx7") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "kube-api-access-7cqx7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:14:44.623475 kubelet[2696]: I0710 00:14:44.623444 2696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4803bec2-5640-4c58-9ea4-6335971c236b" (UID: "4803bec2-5640-4c58-9ea4-6335971c236b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 10 00:14:44.710995 systemd[1]: Removed slice kubepods-burstable-pod4803bec2_5640_4c58_9ea4_6335971c236b.slice - libcontainer container kubepods-burstable-pod4803bec2_5640_4c58_9ea4_6335971c236b.slice.
Jul 10 00:14:44.711147 systemd[1]: kubepods-burstable-pod4803bec2_5640_4c58_9ea4_6335971c236b.slice: Consumed 6.929s CPU time, 127.2M memory peak, 240K read from disk, 13.3M written to disk.
Jul 10 00:14:44.712268 systemd[1]: Removed slice kubepods-besteffort-pod820c45bc_e304_414f_b6bb_2e9593ae5916.slice - libcontainer container kubepods-besteffort-pod820c45bc_e304_414f_b6bb_2e9593ae5916.slice.
Jul 10 00:14:44.717412 kubelet[2696]: I0710 00:14:44.717359 2696 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717412 kubelet[2696]: I0710 00:14:44.717398 2696 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717412 kubelet[2696]: I0710 00:14:44.717410 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717412 kubelet[2696]: I0710 00:14:44.717427 2696 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4803bec2-5640-4c58-9ea4-6335971c236b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717412 kubelet[2696]: I0710 00:14:44.717438 2696 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4803bec2-5640-4c58-9ea4-6335971c236b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717454 2696 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717467 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlhv8\" (UniqueName: \"kubernetes.io/projected/820c45bc-e304-414f-b6bb-2e9593ae5916-kube-api-access-qlhv8\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717478 2696 reconciler_common.go:299] "Volume detached for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/820c45bc-e304-414f-b6bb-2e9593ae5916-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717488 2696 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717501 2696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7cqx7\" (UniqueName: \"kubernetes.io/projected/4803bec2-5640-4c58-9ea4-6335971c236b-kube-api-access-7cqx7\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717512 2696 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.717634 kubelet[2696]: I0710 00:14:44.717523 2696 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4803bec2-5640-4c58-9ea4-6335971c236b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 00:14:44.944824 kubelet[2696]: I0710 00:14:44.942482 2696 scope.go:117] "RemoveContainer" containerID="10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa" Jul 10 00:14:44.949948 containerd[1573]: time="2025-07-10T00:14:44.949897688Z" level=info msg="RemoveContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\"" Jul 10 00:14:44.954718 containerd[1573]: time="2025-07-10T00:14:44.954690457Z" level=info msg="RemoveContainer for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" returns successfully" Jul 10 00:14:44.954952 kubelet[2696]: I0710 00:14:44.954926 2696 scope.go:117] "RemoveContainer" 
containerID="10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa" Jul 10 00:14:44.955325 containerd[1573]: time="2025-07-10T00:14:44.955284902Z" level=error msg="ContainerStatus for \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\": not found" Jul 10 00:14:44.955506 kubelet[2696]: E0710 00:14:44.955479 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\": not found" containerID="10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa" Jul 10 00:14:44.955575 kubelet[2696]: I0710 00:14:44.955529 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa"} err="failed to get container status \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\": rpc error: code = NotFound desc = an error occurred when try to find container \"10918897a52c1823f2a48be439d028de3a86b773a74555b7e2020d82749e0baa\": not found" Jul 10 00:14:44.955575 kubelet[2696]: I0710 00:14:44.955569 2696 scope.go:117] "RemoveContainer" containerID="dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089" Jul 10 00:14:44.957550 containerd[1573]: time="2025-07-10T00:14:44.957518967Z" level=info msg="RemoveContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\"" Jul 10 00:14:44.964484 containerd[1573]: time="2025-07-10T00:14:44.964418167Z" level=info msg="RemoveContainer for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" returns successfully" Jul 10 00:14:44.964699 kubelet[2696]: I0710 00:14:44.964588 2696 scope.go:117] "RemoveContainer" 
containerID="7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720" Jul 10 00:14:44.967169 containerd[1573]: time="2025-07-10T00:14:44.967134793Z" level=info msg="RemoveContainer for \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\"" Jul 10 00:14:44.972429 containerd[1573]: time="2025-07-10T00:14:44.972396177Z" level=info msg="RemoveContainer for \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" returns successfully" Jul 10 00:14:44.972599 kubelet[2696]: I0710 00:14:44.972566 2696 scope.go:117] "RemoveContainer" containerID="62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14" Jul 10 00:14:44.974633 containerd[1573]: time="2025-07-10T00:14:44.974609563Z" level=info msg="RemoveContainer for \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\"" Jul 10 00:14:44.982771 containerd[1573]: time="2025-07-10T00:14:44.982719614Z" level=info msg="RemoveContainer for \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" returns successfully" Jul 10 00:14:44.983747 kubelet[2696]: I0710 00:14:44.983316 2696 scope.go:117] "RemoveContainer" containerID="3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a" Jul 10 00:14:44.984848 containerd[1573]: time="2025-07-10T00:14:44.984780127Z" level=info msg="RemoveContainer for \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\"" Jul 10 00:14:44.989808 containerd[1573]: time="2025-07-10T00:14:44.989752470Z" level=info msg="RemoveContainer for \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" returns successfully" Jul 10 00:14:44.989997 kubelet[2696]: I0710 00:14:44.989962 2696 scope.go:117] "RemoveContainer" containerID="6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4" Jul 10 00:14:44.991571 containerd[1573]: time="2025-07-10T00:14:44.991533370Z" level=info msg="RemoveContainer for \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\"" Jul 10 00:14:45.001951 
containerd[1573]: time="2025-07-10T00:14:45.001900439Z" level=info msg="RemoveContainer for \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" returns successfully" Jul 10 00:14:45.002200 kubelet[2696]: I0710 00:14:45.002166 2696 scope.go:117] "RemoveContainer" containerID="dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089" Jul 10 00:14:45.002509 containerd[1573]: time="2025-07-10T00:14:45.002461289Z" level=error msg="ContainerStatus for \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\": not found" Jul 10 00:14:45.002764 kubelet[2696]: E0710 00:14:45.002720 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\": not found" containerID="dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089" Jul 10 00:14:45.002843 kubelet[2696]: I0710 00:14:45.002778 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089"} err="failed to get container status \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbce19b111df431fa26784ac01216e28f894b52e511be549ad9988553351b089\": not found" Jul 10 00:14:45.002887 kubelet[2696]: I0710 00:14:45.002847 2696 scope.go:117] "RemoveContainer" containerID="7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720" Jul 10 00:14:45.003126 containerd[1573]: time="2025-07-10T00:14:45.003078738Z" level=error msg="ContainerStatus for \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\": not found" Jul 10 00:14:45.003275 kubelet[2696]: E0710 00:14:45.003247 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\": not found" containerID="7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720" Jul 10 00:14:45.003319 kubelet[2696]: I0710 00:14:45.003283 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720"} err="failed to get container status \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d7639a43025f745ac48d1fe4a4011bc0de112f25e04df2d863587b7a1e28720\": not found" Jul 10 00:14:45.003319 kubelet[2696]: I0710 00:14:45.003317 2696 scope.go:117] "RemoveContainer" containerID="62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14" Jul 10 00:14:45.003528 containerd[1573]: time="2025-07-10T00:14:45.003491946Z" level=error msg="ContainerStatus for \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\": not found" Jul 10 00:14:45.003628 kubelet[2696]: E0710 00:14:45.003597 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\": not found" containerID="62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14" Jul 10 00:14:45.003669 kubelet[2696]: I0710 00:14:45.003626 2696 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14"} err="failed to get container status \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\": rpc error: code = NotFound desc = an error occurred when try to find container \"62ef1e1241ab18cccfd913a565b4fab260e5df53d63a006e6dfbeed0cec62c14\": not found" Jul 10 00:14:45.003669 kubelet[2696]: I0710 00:14:45.003644 2696 scope.go:117] "RemoveContainer" containerID="3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a" Jul 10 00:14:45.003879 containerd[1573]: time="2025-07-10T00:14:45.003849167Z" level=error msg="ContainerStatus for \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\": not found" Jul 10 00:14:45.003989 kubelet[2696]: E0710 00:14:45.003966 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\": not found" containerID="3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a" Jul 10 00:14:45.004026 kubelet[2696]: I0710 00:14:45.003992 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a"} err="failed to get container status \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3361403dd42c872d13fd333abd0eddc47b88115b86f7ef079fe800944beee38a\": not found" Jul 10 00:14:45.004026 kubelet[2696]: I0710 00:14:45.004008 2696 scope.go:117] "RemoveContainer" containerID="6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4" Jul 10 
00:14:45.004227 containerd[1573]: time="2025-07-10T00:14:45.004188605Z" level=error msg="ContainerStatus for \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\": not found" Jul 10 00:14:45.004380 kubelet[2696]: E0710 00:14:45.004347 2696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\": not found" containerID="6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4" Jul 10 00:14:45.004423 kubelet[2696]: I0710 00:14:45.004391 2696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4"} err="failed to get container status \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fb24ef494773bae9e46b2e9563f988ce16f76f7b3a5b71220d9f162e4514dd4\": not found" Jul 10 00:14:45.420101 systemd[1]: var-lib-kubelet-pods-820c45bc\x2de304\x2d414f\x2db6bb\x2d2e9593ae5916-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqlhv8.mount: Deactivated successfully. Jul 10 00:14:45.420256 systemd[1]: var-lib-kubelet-pods-4803bec2\x2d5640\x2d4c58\x2d9ea4\x2d6335971c236b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7cqx7.mount: Deactivated successfully. Jul 10 00:14:45.420354 systemd[1]: var-lib-kubelet-pods-4803bec2\x2d5640\x2d4c58\x2d9ea4\x2d6335971c236b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 10 00:14:45.420463 systemd[1]: var-lib-kubelet-pods-4803bec2\x2d5640\x2d4c58\x2d9ea4\x2d6335971c236b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:14:45.741460 kubelet[2696]: E0710 00:14:45.741406 2696 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:14:46.312142 sshd[4317]: Connection closed by 10.0.0.1 port 36622 Jul 10 00:14:46.312839 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:46.326387 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:36622.service: Deactivated successfully. Jul 10 00:14:46.328953 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:14:46.330052 systemd-logind[1509]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:14:46.333898 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:36626.service - OpenSSH per-connection server daemon (10.0.0.1:36626). Jul 10 00:14:46.334662 systemd-logind[1509]: Removed session 24. Jul 10 00:14:46.398985 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 36626 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:46.400717 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:46.405864 systemd-logind[1509]: New session 25 of user core. Jul 10 00:14:46.415937 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 10 00:14:46.701954 kubelet[2696]: I0710 00:14:46.701905 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4803bec2-5640-4c58-9ea4-6335971c236b" path="/var/lib/kubelet/pods/4803bec2-5640-4c58-9ea4-6335971c236b/volumes" Jul 10 00:14:46.702743 kubelet[2696]: I0710 00:14:46.702716 2696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820c45bc-e304-414f-b6bb-2e9593ae5916" path="/var/lib/kubelet/pods/820c45bc-e304-414f-b6bb-2e9593ae5916/volumes" Jul 10 00:14:46.703561 kubelet[2696]: E0710 00:14:46.703528 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:14:47.139564 sshd[4473]: Connection closed by 10.0.0.1 port 36626 Jul 10 00:14:47.140997 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:47.156049 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:36626.service: Deactivated successfully. Jul 10 00:14:47.159533 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:14:47.163091 systemd-logind[1509]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:14:47.166627 systemd-logind[1509]: Removed session 25. Jul 10 00:14:47.170119 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:36640.service - OpenSSH per-connection server daemon (10.0.0.1:36640). Jul 10 00:14:47.201440 systemd[1]: Created slice kubepods-burstable-pod02ab398c_f5e0_4b89_ac41_71c79364a724.slice - libcontainer container kubepods-burstable-pod02ab398c_f5e0_4b89_ac41_71c79364a724.slice. Jul 10 00:14:47.222051 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 36640 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:47.224367 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:47.230345 systemd-logind[1509]: New session 26 of user core. 
Jul 10 00:14:47.231531 kubelet[2696]: I0710 00:14:47.231502 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02ab398c-f5e0-4b89-ac41-71c79364a724-cilium-ipsec-secrets\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.231904 kubelet[2696]: I0710 00:14:47.231884 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7sv9\" (UniqueName: \"kubernetes.io/projected/02ab398c-f5e0-4b89-ac41-71c79364a724-kube-api-access-w7sv9\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232005 kubelet[2696]: I0710 00:14:47.231972 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02ab398c-f5e0-4b89-ac41-71c79364a724-hubble-tls\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232005 kubelet[2696]: I0710 00:14:47.232000 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-cilium-cgroup\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232005 kubelet[2696]: I0710 00:14:47.232015 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-xtables-lock\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232029 2696 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-host-proc-sys-kernel\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232043 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-bpf-maps\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232055 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-lib-modules\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232073 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02ab398c-f5e0-4b89-ac41-71c79364a724-clustermesh-secrets\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232166 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-hostproc\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232238 kubelet[2696]: I0710 00:14:47.232212 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-cni-path\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232424 kubelet[2696]: I0710 00:14:47.232238 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02ab398c-f5e0-4b89-ac41-71c79364a724-cilium-config-path\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232424 kubelet[2696]: I0710 00:14:47.232263 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-host-proc-sys-net\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232424 kubelet[2696]: I0710 00:14:47.232290 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-cilium-run\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.232424 kubelet[2696]: I0710 00:14:47.232315 2696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02ab398c-f5e0-4b89-ac41-71c79364a724-etc-cni-netd\") pod \"cilium-5lgdm\" (UID: \"02ab398c-f5e0-4b89-ac41-71c79364a724\") " pod="kube-system/cilium-5lgdm" Jul 10 00:14:47.238063 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 10 00:14:47.290833 sshd[4487]: Connection closed by 10.0.0.1 port 36640 Jul 10 00:14:47.291215 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:47.303754 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:36640.service: Deactivated successfully. Jul 10 00:14:47.306149 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:14:47.307140 systemd-logind[1509]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:14:47.310537 systemd[1]: Started sshd@26-10.0.0.19:22-10.0.0.1:36650.service - OpenSSH per-connection server daemon (10.0.0.1:36650). Jul 10 00:14:47.311503 systemd-logind[1509]: Removed session 26. Jul 10 00:14:47.366826 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 36650 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:14:47.368714 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:14:47.373709 systemd-logind[1509]: New session 27 of user core. Jul 10 00:14:47.384083 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 10 00:14:47.508186 kubelet[2696]: E0710 00:14:47.508104 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:14:47.509005 containerd[1573]: time="2025-07-10T00:14:47.508828501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lgdm,Uid:02ab398c-f5e0-4b89-ac41-71c79364a724,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:47.535968 containerd[1573]: time="2025-07-10T00:14:47.535911615Z" level=info msg="connecting to shim 53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:47.571939 systemd[1]: Started cri-containerd-53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f.scope - libcontainer container 53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f. 
Jul 10 00:14:47.603117 containerd[1573]: time="2025-07-10T00:14:47.603073730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lgdm,Uid:02ab398c-f5e0-4b89-ac41-71c79364a724,Namespace:kube-system,Attempt:0,} returns sandbox id \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\""
Jul 10 00:14:47.604045 kubelet[2696]: E0710 00:14:47.604003 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:47.610210 containerd[1573]: time="2025-07-10T00:14:47.610170900Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 00:14:47.619395 containerd[1573]: time="2025-07-10T00:14:47.619357244Z" level=info msg="Container b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:14:47.628785 containerd[1573]: time="2025-07-10T00:14:47.628745623Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\""
Jul 10 00:14:47.629379 containerd[1573]: time="2025-07-10T00:14:47.629293958Z" level=info msg="StartContainer for \"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\""
Jul 10 00:14:47.630315 containerd[1573]: time="2025-07-10T00:14:47.630252857Z" level=info msg="connecting to shim b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" protocol=ttrpc version=3
Jul 10 00:14:47.654964 systemd[1]: Started cri-containerd-b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6.scope - libcontainer container b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6.
Jul 10 00:14:47.691932 containerd[1573]: time="2025-07-10T00:14:47.691878135Z" level=info msg="StartContainer for \"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\" returns successfully"
Jul 10 00:14:47.704705 systemd[1]: cri-containerd-b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6.scope: Deactivated successfully.
Jul 10 00:14:47.706201 containerd[1573]: time="2025-07-10T00:14:47.706145817Z" level=info msg="received exit event container_id:\"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\" id:\"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\" pid:4564 exited_at:{seconds:1752106487 nanos:705890189}"
Jul 10 00:14:47.706381 containerd[1573]: time="2025-07-10T00:14:47.706341581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\" id:\"b58ffd57f0ac5343b732b025dc1cf0e782715e27226449124d99470517feb4a6\" pid:4564 exited_at:{seconds:1752106487 nanos:705890189}"
Jul 10 00:14:47.964446 kubelet[2696]: E0710 00:14:47.964408 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:47.971395 containerd[1573]: time="2025-07-10T00:14:47.971345312Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 00:14:47.982257 containerd[1573]: time="2025-07-10T00:14:47.982191670Z" level=info msg="Container 69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:14:47.990311 containerd[1573]: time="2025-07-10T00:14:47.990250054Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\""
Jul 10 00:14:47.990893 containerd[1573]: time="2025-07-10T00:14:47.990870146Z" level=info msg="StartContainer for \"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\""
Jul 10 00:14:47.991905 containerd[1573]: time="2025-07-10T00:14:47.991870793Z" level=info msg="connecting to shim 69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" protocol=ttrpc version=3
Jul 10 00:14:48.017050 systemd[1]: Started cri-containerd-69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd.scope - libcontainer container 69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd.
Jul 10 00:14:48.055852 containerd[1573]: time="2025-07-10T00:14:48.055773462Z" level=info msg="StartContainer for \"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\" returns successfully"
Jul 10 00:14:48.061198 systemd[1]: cri-containerd-69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd.scope: Deactivated successfully.
Jul 10 00:14:48.062036 containerd[1573]: time="2025-07-10T00:14:48.061922270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\" id:\"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\" pid:4611 exited_at:{seconds:1752106488 nanos:61463716}"
Jul 10 00:14:48.062317 containerd[1573]: time="2025-07-10T00:14:48.062029063Z" level=info msg="received exit event container_id:\"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\" id:\"69fba76eef9975cff281a0fa1085e30f93d6cf4310a50845d44c8518d6cb7fcd\" pid:4611 exited_at:{seconds:1752106488 nanos:61463716}"
Jul 10 00:14:48.968314 kubelet[2696]: E0710 00:14:48.968273 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:49.170762 containerd[1573]: time="2025-07-10T00:14:49.170691812Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 00:14:49.339177 containerd[1573]: time="2025-07-10T00:14:49.339037608Z" level=info msg="Container 1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:14:49.400829 containerd[1573]: time="2025-07-10T00:14:49.400758205Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\""
Jul 10 00:14:49.401448 containerd[1573]: time="2025-07-10T00:14:49.401408393Z" level=info msg="StartContainer for \"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\""
Jul 10 00:14:49.402974 containerd[1573]: time="2025-07-10T00:14:49.402945051Z" level=info msg="connecting to shim 1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" protocol=ttrpc version=3
Jul 10 00:14:49.425020 systemd[1]: Started cri-containerd-1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0.scope - libcontainer container 1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0.
Jul 10 00:14:49.471431 systemd[1]: cri-containerd-1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0.scope: Deactivated successfully.
Jul 10 00:14:49.473227 containerd[1573]: time="2025-07-10T00:14:49.472995816Z" level=info msg="received exit event container_id:\"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\" id:\"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\" pid:4655 exited_at:{seconds:1752106489 nanos:472712637}"
Jul 10 00:14:49.473346 containerd[1573]: time="2025-07-10T00:14:49.473305767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\" id:\"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\" pid:4655 exited_at:{seconds:1752106489 nanos:472712637}"
Jul 10 00:14:49.473588 containerd[1573]: time="2025-07-10T00:14:49.473563508Z" level=info msg="StartContainer for \"1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0\" returns successfully"
Jul 10 00:14:49.497181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c2f24b1049a3ab4bfac9746e3618aee930c21f694c9253590a987621cfb67f0-rootfs.mount: Deactivated successfully.
Jul 10 00:14:49.974402 kubelet[2696]: E0710 00:14:49.974340 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:49.981883 containerd[1573]: time="2025-07-10T00:14:49.981830529Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 00:14:49.993872 containerd[1573]: time="2025-07-10T00:14:49.993808437Z" level=info msg="Container 94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:14:49.998427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989007545.mount: Deactivated successfully.
Jul 10 00:14:50.262226 containerd[1573]: time="2025-07-10T00:14:50.262062436Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\""
Jul 10 00:14:50.262949 containerd[1573]: time="2025-07-10T00:14:50.262904651Z" level=info msg="StartContainer for \"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\""
Jul 10 00:14:50.264145 containerd[1573]: time="2025-07-10T00:14:50.264092272Z" level=info msg="connecting to shim 94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" protocol=ttrpc version=3
Jul 10 00:14:50.286984 systemd[1]: Started cri-containerd-94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377.scope - libcontainer container 94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377.
Jul 10 00:14:50.318676 systemd[1]: cri-containerd-94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377.scope: Deactivated successfully.
Jul 10 00:14:50.318994 containerd[1573]: time="2025-07-10T00:14:50.318951519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\" id:\"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\" pid:4694 exited_at:{seconds:1752106490 nanos:318714547}"
Jul 10 00:14:50.355851 containerd[1573]: time="2025-07-10T00:14:50.355765883Z" level=info msg="received exit event container_id:\"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\" id:\"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\" pid:4694 exited_at:{seconds:1752106490 nanos:318714547}"
Jul 10 00:14:50.365078 containerd[1573]: time="2025-07-10T00:14:50.365018733Z" level=info msg="StartContainer for \"94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377\" returns successfully"
Jul 10 00:14:50.381365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94660cb90de8ba85c3846f46de423821e3e4fe08bf7dfd04752deca4a0193377-rootfs.mount: Deactivated successfully.
Jul 10 00:14:50.742297 kubelet[2696]: E0710 00:14:50.742242 2696 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 00:14:50.979810 kubelet[2696]: E0710 00:14:50.979679 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:50.985828 containerd[1573]: time="2025-07-10T00:14:50.985769532Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 00:14:50.997995 containerd[1573]: time="2025-07-10T00:14:50.997861669Z" level=info msg="Container 64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1: CDI devices from CRI Config.CDIDevices: []"
Jul 10 00:14:51.007319 containerd[1573]: time="2025-07-10T00:14:51.007270842Z" level=info msg="CreateContainer within sandbox \"53aab3b233370f6faf1d36346f518af8345696d3b01c19f0aba06444295e0a7f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\""
Jul 10 00:14:51.007870 containerd[1573]: time="2025-07-10T00:14:51.007838132Z" level=info msg="StartContainer for \"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\""
Jul 10 00:14:51.008858 containerd[1573]: time="2025-07-10T00:14:51.008743926Z" level=info msg="connecting to shim 64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1" address="unix:///run/containerd/s/dd650450d3cec4d28aaa7853913e625ee93b4c7d9a95c0b6e09cb20a23f057d2" protocol=ttrpc version=3
Jul 10 00:14:51.042924 systemd[1]: Started cri-containerd-64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1.scope - libcontainer container 64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1.
Jul 10 00:14:51.142985 containerd[1573]: time="2025-07-10T00:14:51.142926931Z" level=info msg="StartContainer for \"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" returns successfully"
Jul 10 00:14:51.217739 containerd[1573]: time="2025-07-10T00:14:51.217656413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" id:\"5b20bb70dd45669265ce79a82db1ededc620915b55f0e8d87c213c9efa842fa4\" pid:4765 exited_at:{seconds:1752106491 nanos:217077200}"
Jul 10 00:14:51.559930 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 10 00:14:51.984888 kubelet[2696]: E0710 00:14:51.984853 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:52.790970 kubelet[2696]: I0710 00:14:52.790900 2696 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:14:52Z","lastTransitionTime":"2025-07-10T00:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 10 00:14:53.509334 kubelet[2696]: E0710 00:14:53.509268 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:53.743174 containerd[1573]: time="2025-07-10T00:14:53.743110672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" id:\"a536e493942e93df1a786787b73cf879260f2ae9afe8066f910b684dfa97f037\" pid:5038 exit_status:1 exited_at:{seconds:1752106493 nanos:742667880}"
Jul 10 00:14:54.683274 systemd-networkd[1462]: lxc_health: Link UP
Jul 10 00:14:54.684504 systemd-networkd[1462]: lxc_health: Gained carrier
Jul 10 00:14:55.510056 kubelet[2696]: E0710 00:14:55.510006 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:55.527976 kubelet[2696]: I0710 00:14:55.527535 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5lgdm" podStartSLOduration=8.527515796 podStartE2EDuration="8.527515796s" podCreationTimestamp="2025-07-10 00:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:14:51.998768254 +0000 UTC m=+91.445366165" watchObservedRunningTime="2025-07-10 00:14:55.527515796 +0000 UTC m=+94.974113717"
Jul 10 00:14:55.700136 kubelet[2696]: E0710 00:14:55.700081 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:55.846014 containerd[1573]: time="2025-07-10T00:14:55.845850908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" id:\"8a2500b2134281ae17ccbdfea174f76356b865c62d552a0aa29b1c436cb90e8d\" pid:5297 exited_at:{seconds:1752106495 nanos:845311693}"
Jul 10 00:14:55.993628 kubelet[2696]: E0710 00:14:55.993567 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:56.028060 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Jul 10 00:14:56.995780 kubelet[2696]: E0710 00:14:56.995635 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:57.699745 kubelet[2696]: E0710 00:14:57.699677 2696 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:14:57.972295 containerd[1573]: time="2025-07-10T00:14:57.972120764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" id:\"ad0c57dbdff8d999602a028b51829debb97ef2fd8dd77dd3bc5b912c79a55b95\" pid:5327 exited_at:{seconds:1752106497 nanos:971595095}"
Jul 10 00:15:00.073254 containerd[1573]: time="2025-07-10T00:15:00.073181805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64aa383b083a40f970f90d411827204fa283b03ebf22427fe471470c1881c8f1\" id:\"d6fe46cc8becd54b2a2aae8925e7ebefb2fc149900c7a5f2a36f2c337874b0e1\" pid:5359 exited_at:{seconds:1752106500 nanos:72634596}"
Jul 10 00:15:00.079218 sshd[4500]: Connection closed by 10.0.0.1 port 36650
Jul 10 00:15:00.079702 sshd-session[4494]: pam_unix(sshd:session): session closed for user core
Jul 10 00:15:00.083182 systemd[1]: sshd@26-10.0.0.19:22-10.0.0.1:36650.service: Deactivated successfully.
Jul 10 00:15:00.085744 systemd[1]: session-27.scope: Deactivated successfully.
Jul 10 00:15:00.087926 systemd-logind[1509]: Session 27 logged out. Waiting for processes to exit.
Jul 10 00:15:00.089477 systemd-logind[1509]: Removed session 27.