Jul 7 06:13:57.849180 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025 Jul 7 06:13:57.849207 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:13:57.849222 kernel: BIOS-provided physical RAM map: Jul 7 06:13:57.849231 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:13:57.849240 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 7 06:13:57.849249 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 7 06:13:57.849260 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 7 06:13:57.849269 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 7 06:13:57.849289 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 7 06:13:57.849298 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 7 06:13:57.849307 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 7 06:13:57.849316 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 7 06:13:57.849325 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 7 06:13:57.849335 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 7 06:13:57.849348 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 7 06:13:57.849358 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 7 06:13:57.849368 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 7 06:13:57.849377 
kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 7 06:13:57.849387 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 7 06:13:57.849396 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 7 06:13:57.849406 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 7 06:13:57.849415 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 7 06:13:57.849425 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 06:13:57.849434 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 06:13:57.849444 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 7 06:13:57.849456 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 06:13:57.849466 kernel: NX (Execute Disable) protection: active Jul 7 06:13:57.849475 kernel: APIC: Static calls initialized Jul 7 06:13:57.849490 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 7 06:13:57.849515 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 7 06:13:57.849541 kernel: extended physical RAM map: Jul 7 06:13:57.849556 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:13:57.849565 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 7 06:13:57.849575 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 7 06:13:57.849585 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 7 06:13:57.849594 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 7 06:13:57.849607 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 7 06:13:57.849617 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 7 06:13:57.849626 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] 
usable Jul 7 06:13:57.849636 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 7 06:13:57.849651 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 7 06:13:57.849661 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 7 06:13:57.849673 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 7 06:13:57.849694 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 7 06:13:57.849704 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 7 06:13:57.849714 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 7 06:13:57.849724 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 7 06:13:57.849734 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 7 06:13:57.849745 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 7 06:13:57.849755 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 7 06:13:57.849765 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 7 06:13:57.849787 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 7 06:13:57.849807 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 7 06:13:57.849818 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 7 06:13:57.849828 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 06:13:57.849838 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 06:13:57.849848 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 7 06:13:57.849858 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 7 06:13:57.849868 kernel: efi: EFI v2.7 by 
EDK II Jul 7 06:13:57.849878 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 7 06:13:57.849888 kernel: random: crng init done Jul 7 06:13:57.849925 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 7 06:13:57.849935 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 7 06:13:57.849949 kernel: secureboot: Secure boot disabled Jul 7 06:13:57.849959 kernel: SMBIOS 2.8 present. Jul 7 06:13:57.849969 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 7 06:13:57.849978 kernel: DMI: Memory slots populated: 1/1 Jul 7 06:13:57.849988 kernel: Hypervisor detected: KVM Jul 7 06:13:57.849998 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 06:13:57.850008 kernel: kvm-clock: using sched offset of 3651001405 cycles Jul 7 06:13:57.850019 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 06:13:57.850030 kernel: tsc: Detected 2794.748 MHz processor Jul 7 06:13:57.850040 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 06:13:57.850051 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 06:13:57.850064 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 7 06:13:57.850074 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 06:13:57.850085 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 06:13:57.850110 kernel: Using GB pages for direct mapping Jul 7 06:13:57.850120 kernel: ACPI: Early table checksum verification disabled Jul 7 06:13:57.850131 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 7 06:13:57.850141 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 7 06:13:57.850151 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850162 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850176 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 7 06:13:57.850187 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850197 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850208 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850218 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 06:13:57.850229 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 7 06:13:57.850239 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 7 06:13:57.850250 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 7 06:13:57.850265 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 7 06:13:57.850278 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 7 06:13:57.850291 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 7 06:13:57.850303 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 7 06:13:57.850316 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 7 06:13:57.850329 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 7 06:13:57.850342 kernel: No NUMA configuration found Jul 7 06:13:57.850355 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 7 06:13:57.850368 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 7 06:13:57.850381 kernel: Zone ranges: Jul 7 06:13:57.850399 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 06:13:57.850411 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 7 06:13:57.850424 kernel: Normal empty Jul 7 06:13:57.850437 kernel: Device empty Jul 7 06:13:57.850450 kernel: Movable zone start for each node Jul 7 06:13:57.850463 
kernel: Early memory node ranges Jul 7 06:13:57.850474 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 7 06:13:57.850484 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 7 06:13:57.850495 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 7 06:13:57.850507 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 7 06:13:57.850518 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 7 06:13:57.850528 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 7 06:13:57.850538 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 7 06:13:57.850548 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 7 06:13:57.850559 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 7 06:13:57.850569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 06:13:57.850580 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 7 06:13:57.850602 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 7 06:13:57.850613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 06:13:57.850624 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 7 06:13:57.850635 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 7 06:13:57.850648 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 7 06:13:57.850659 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 7 06:13:57.850670 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 7 06:13:57.850692 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 06:13:57.850703 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 06:13:57.850717 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 06:13:57.850728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 06:13:57.850739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 06:13:57.850749 kernel: ACPI: INT_SRC_OVR (bus 0 
bus_irq 9 global_irq 9 high level) Jul 7 06:13:57.850760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 06:13:57.850771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 06:13:57.850782 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 06:13:57.850793 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 06:13:57.850804 kernel: TSC deadline timer available Jul 7 06:13:57.850814 kernel: CPU topo: Max. logical packages: 1 Jul 7 06:13:57.850828 kernel: CPU topo: Max. logical dies: 1 Jul 7 06:13:57.850838 kernel: CPU topo: Max. dies per package: 1 Jul 7 06:13:57.850849 kernel: CPU topo: Max. threads per core: 1 Jul 7 06:13:57.850859 kernel: CPU topo: Num. cores per package: 4 Jul 7 06:13:57.850870 kernel: CPU topo: Num. threads per package: 4 Jul 7 06:13:57.850881 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 7 06:13:57.850891 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 06:13:57.850902 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 7 06:13:57.850913 kernel: kvm-guest: setup PV sched yield Jul 7 06:13:57.850927 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 7 06:13:57.850938 kernel: Booting paravirtualized kernel on KVM Jul 7 06:13:57.850949 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 06:13:57.850960 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 7 06:13:57.850971 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 7 06:13:57.850982 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 7 06:13:57.850993 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 7 06:13:57.851004 kernel: kvm-guest: PV spinlocks enabled Jul 7 06:13:57.851015 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 06:13:57.851030 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:13:57.851041 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:13:57.851052 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 06:13:57.851063 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:13:57.851074 kernel: Fallback order for Node 0: 0 Jul 7 06:13:57.851085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 7 06:13:57.851153 kernel: Policy zone: DMA32 Jul 7 06:13:57.851165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:13:57.851179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 7 06:13:57.851190 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 06:13:57.851201 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 06:13:57.851212 kernel: Dynamic Preempt: voluntary Jul 7 06:13:57.851223 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:13:57.851235 kernel: rcu: RCU event tracing is enabled. Jul 7 06:13:57.851246 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 7 06:13:57.851257 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:13:57.851268 kernel: Rude variant of Tasks RCU enabled. Jul 7 06:13:57.851282 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:13:57.851293 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:13:57.851304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 7 06:13:57.851315 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 7 06:13:57.851326 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 06:13:57.851337 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 7 06:13:57.851348 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 7 06:13:57.851359 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:13:57.851369 kernel: Console: colour dummy device 80x25 Jul 7 06:13:57.851383 kernel: printk: legacy console [ttyS0] enabled Jul 7 06:13:57.851394 kernel: ACPI: Core revision 20240827 Jul 7 06:13:57.851405 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 7 06:13:57.851416 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 06:13:57.851426 kernel: x2apic enabled Jul 7 06:13:57.851437 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 06:13:57.851448 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 7 06:13:57.851459 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 7 06:13:57.851470 kernel: kvm-guest: setup PV IPIs Jul 7 06:13:57.851483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 7 06:13:57.851495 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 7 06:13:57.851506 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 7 06:13:57.851517 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 06:13:57.851528 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 7 06:13:57.851538 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 7 06:13:57.851549 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 06:13:57.851560 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 06:13:57.851571 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 06:13:57.851585 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 7 06:13:57.851596 kernel: RETBleed: Mitigation: untrained return thunk Jul 7 06:13:57.851607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 06:13:57.851618 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 06:13:57.851629 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 7 06:13:57.851641 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 7 06:13:57.851651 kernel: x86/bugs: return thunk changed Jul 7 06:13:57.851662 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 7 06:13:57.851676 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 06:13:57.851697 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 06:13:57.851707 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 06:13:57.851719 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 06:13:57.851730 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 7 06:13:57.851740 kernel: Freeing SMP alternatives memory: 32K Jul 7 06:13:57.851751 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:13:57.851762 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 06:13:57.851773 kernel: landlock: Up and running. Jul 7 06:13:57.851786 kernel: SELinux: Initializing. Jul 7 06:13:57.851797 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:13:57.851808 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 06:13:57.851819 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 7 06:13:57.851830 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 7 06:13:57.851841 kernel: ... version: 0 Jul 7 06:13:57.851851 kernel: ... bit width: 48 Jul 7 06:13:57.851862 kernel: ... generic registers: 6 Jul 7 06:13:57.851873 kernel: ... value mask: 0000ffffffffffff Jul 7 06:13:57.851887 kernel: ... max period: 00007fffffffffff Jul 7 06:13:57.851897 kernel: ... fixed-purpose events: 0 Jul 7 06:13:57.851908 kernel: ... event mask: 000000000000003f Jul 7 06:13:57.851919 kernel: signal: max sigframe size: 1776 Jul 7 06:13:57.851930 kernel: rcu: Hierarchical SRCU implementation. Jul 7 06:13:57.851941 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:13:57.851952 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 06:13:57.851963 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:13:57.851973 kernel: smpboot: x86: Booting SMP configuration: Jul 7 06:13:57.851984 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 7 06:13:57.851997 kernel: smp: Brought up 1 node, 4 CPUs Jul 7 06:13:57.852008 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 7 06:13:57.852019 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137196K reserved, 0K cma-reserved) Jul 7 06:13:57.852030 kernel: devtmpfs: initialized Jul 7 06:13:57.852041 kernel: x86/mm: Memory block size: 128MB Jul 7 06:13:57.852052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 7 06:13:57.852063 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 7 06:13:57.852074 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 7 06:13:57.852088 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 7 06:13:57.852122 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 7 06:13:57.852133 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 7 06:13:57.852145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:13:57.852156 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 7 06:13:57.852167 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:13:57.852180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:13:57.852194 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:13:57.852207 kernel: audit: type=2000 audit(1751868834.807:1): state=initialized audit_enabled=0 res=1 Jul 7 06:13:57.852225 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:13:57.852239 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 06:13:57.852252 kernel: cpuidle: using governor menu Jul 7 06:13:57.852266 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 
0.5 Jul 7 06:13:57.852280 kernel: dca service started, version 1.12.1 Jul 7 06:13:57.852294 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 7 06:13:57.852307 kernel: PCI: Using configuration type 1 for base access Jul 7 06:13:57.852321 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 7 06:13:57.852332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:13:57.852346 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:13:57.852356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:13:57.852367 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:13:57.852378 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:13:57.852388 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:13:57.852410 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:13:57.852421 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 06:13:57.852449 kernel: ACPI: Interpreter enabled Jul 7 06:13:57.852461 kernel: ACPI: PM: (supports S0 S3 S5) Jul 7 06:13:57.852475 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 06:13:57.852486 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 06:13:57.852497 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 06:13:57.852508 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 06:13:57.852519 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 06:13:57.852768 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 06:13:57.852925 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 7 06:13:57.853072 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 7 06:13:57.853110 kernel: PCI host bridge to bus 0000:00 Jul 7 06:13:57.853288 kernel: pci_bus 0000:00: root bus 
resource [io 0x0000-0x0cf7 window] Jul 7 06:13:57.853496 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 06:13:57.853641 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 06:13:57.853793 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 7 06:13:57.853934 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 7 06:13:57.854068 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 7 06:13:57.854251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 06:13:57.854419 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 7 06:13:57.854584 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 7 06:13:57.854748 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 7 06:13:57.854899 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 7 06:13:57.855047 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 7 06:13:57.855235 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 06:13:57.855396 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 7 06:13:57.855549 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 7 06:13:57.855708 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 7 06:13:57.855858 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 7 06:13:57.856017 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 7 06:13:57.856189 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 7 06:13:57.856343 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 7 06:13:57.856491 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 7 06:13:57.856650 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 conventional PCI endpoint Jul 7 06:13:57.856813 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 7 06:13:57.856965 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 7 06:13:57.857155 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 7 06:13:57.857312 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 7 06:13:57.857478 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 7 06:13:57.857629 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 06:13:57.857802 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 7 06:13:57.857954 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 7 06:13:57.858122 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 7 06:13:57.858285 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 7 06:13:57.858438 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 7 06:13:57.858454 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 06:13:57.858465 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 06:13:57.858476 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 06:13:57.858487 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 06:13:57.858498 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 06:13:57.858509 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 06:13:57.858520 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 06:13:57.858534 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 06:13:57.858545 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 06:13:57.858556 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 06:13:57.858567 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 06:13:57.858578 kernel: ACPI: PCI: Interrupt 
link GSID configured for IRQ 19 Jul 7 06:13:57.858589 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 06:13:57.858600 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 06:13:57.858610 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 06:13:57.858621 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 06:13:57.858635 kernel: iommu: Default domain type: Translated Jul 7 06:13:57.858646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 06:13:57.858657 kernel: efivars: Registered efivars operations Jul 7 06:13:57.858668 kernel: PCI: Using ACPI for IRQ routing Jul 7 06:13:57.858689 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 06:13:57.858700 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 7 06:13:57.858711 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 7 06:13:57.858721 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 7 06:13:57.858732 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 7 06:13:57.858743 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 7 06:13:57.858756 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 7 06:13:57.858767 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 7 06:13:57.858778 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 7 06:13:57.858927 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 06:13:57.859072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 06:13:57.859253 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 06:13:57.859269 kernel: vgaarb: loaded Jul 7 06:13:57.859284 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 7 06:13:57.859295 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 7 06:13:57.859306 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 06:13:57.859317 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 
06:13:57.859328 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:13:57.859339 kernel: pnp: PnP ACPI init Jul 7 06:13:57.859500 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 7 06:13:57.859537 kernel: pnp: PnP ACPI: found 6 devices Jul 7 06:13:57.859552 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 06:13:57.859566 kernel: NET: Registered PF_INET protocol family Jul 7 06:13:57.859577 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:13:57.859589 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 06:13:57.859600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:13:57.859611 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:13:57.859623 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 06:13:57.859634 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 06:13:57.859645 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:13:57.859659 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 06:13:57.859670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:13:57.859696 kernel: NET: Registered PF_XDP protocol family Jul 7 06:13:57.859845 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 7 06:13:57.859996 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 7 06:13:57.860151 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 06:13:57.860285 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 06:13:57.860417 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 06:13:57.860554 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] 
Jul 7 06:13:57.860699 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 7 06:13:57.860833 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:13:57.860849 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:13:57.860874 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:13:57.860905 kernel: Initialise system trusted keyrings
Jul 7 06:13:57.860926 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:13:57.860947 kernel: Key type asymmetric registered
Jul 7 06:13:57.860958 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:13:57.860975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:13:57.860986 kernel: io scheduler mq-deadline registered
Jul 7 06:13:57.861001 kernel: io scheduler kyber registered
Jul 7 06:13:57.861012 kernel: io scheduler bfq registered
Jul 7 06:13:57.861023 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:13:57.861035 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:13:57.861049 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:13:57.861061 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:13:57.861073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:13:57.861084 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:13:57.861111 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:13:57.861123 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:13:57.861134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:13:57.861291 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:13:57.861313 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:13:57.861451 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:13:57.861589 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:13:57 UTC (1751868837)
Jul 7 06:13:57.861738 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 06:13:57.861754 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:13:57.861765 kernel: efifb: probing for efifb
Jul 7 06:13:57.861777 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 7 06:13:57.861789 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 7 06:13:57.861804 kernel: efifb: scrolling: redraw
Jul 7 06:13:57.861816 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:13:57.861827 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 06:13:57.861838 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:13:57.861849 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:13:57.861860 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:13:57.861872 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:13:57.861883 kernel: Segment Routing with IPv6
Jul 7 06:13:57.861894 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:13:57.861905 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:13:57.861920 kernel: Key type dns_resolver registered
Jul 7 06:13:57.861931 kernel: IPI shorthand broadcast: enabled
Jul 7 06:13:57.861942 kernel: sched_clock: Marking stable (2886005108, 172495964)->(3076879036, -18377964)
Jul 7 06:13:57.861953 kernel: registered taskstats version 1
Jul 7 06:13:57.861965 kernel: Loading compiled-in X.509 certificates
Jul 7 06:13:57.861976 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:13:57.861987 kernel: Demotion targets for Node 0: null
Jul 7 06:13:57.861999 kernel: Key type .fscrypt registered
Jul 7 06:13:57.862010 kernel: Key type fscrypt-provisioning registered
Jul 7 06:13:57.862023 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:13:57.862035 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:13:57.862046 kernel: ima: No architecture policies found
Jul 7 06:13:57.862057 kernel: clk: Disabling unused clocks
Jul 7 06:13:57.862068 kernel: Warning: unable to open an initial console.
Jul 7 06:13:57.862080 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:13:57.862091 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:13:57.862118 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:13:57.862133 kernel: Run /init as init process
Jul 7 06:13:57.862145 kernel: with arguments:
Jul 7 06:13:57.862156 kernel: /init
Jul 7 06:13:57.862167 kernel: with environment:
Jul 7 06:13:57.862178 kernel: HOME=/
Jul 7 06:13:57.862189 kernel: TERM=linux
Jul 7 06:13:57.862200 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:13:57.862213 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:13:57.862228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:13:57.862244 systemd[1]: Detected virtualization kvm.
Jul 7 06:13:57.862255 systemd[1]: Detected architecture x86-64.
Jul 7 06:13:57.862267 systemd[1]: Running in initrd.
Jul 7 06:13:57.862278 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:13:57.862291 systemd[1]: Hostname set to .
Jul 7 06:13:57.862303 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:13:57.862315 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:13:57.862329 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:57.862341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:57.862354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:13:57.862367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:57.862381 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:13:57.862395 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:13:57.862411 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:13:57.862428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:13:57.862440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:57.862452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:57.862465 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:57.862477 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:57.862489 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:57.862501 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:57.862513 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:57.862525 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:57.862540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:13:57.862552 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:13:57.862564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:57.862576 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:57.862588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:57.862600 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:57.862612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:13:57.862624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:57.862639 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:13:57.862652 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:13:57.862664 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:13:57.862676 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:57.862698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:57.862714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:57.862726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:57.862741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:57.862753 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:13:57.862795 systemd-journald[219]: Collecting audit messages is disabled.
Jul 7 06:13:57.862826 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:13:57.862839 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:57.862851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:13:57.862864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:57.862877 systemd-journald[219]: Journal started
Jul 7 06:13:57.862906 systemd-journald[219]: Runtime Journal (/run/log/journal/848d0bc8e98847a6932340214724e32c) is 6M, max 48.5M, 42.4M free.
Jul 7 06:13:57.849837 systemd-modules-load[220]: Inserted module 'overlay'
Jul 7 06:13:57.867056 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:57.869826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:57.875214 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:57.880826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:57.883451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:13:57.883474 kernel: Bridge firewalling registered
Jul 7 06:13:57.883941 systemd-modules-load[220]: Inserted module 'br_netfilter'
Jul 7 06:13:57.886305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:57.888467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:57.892782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:57.895443 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:13:57.896215 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:13:57.902872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:57.907466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:57.911334 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:57.916804 dracut-cmdline[256]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:57.965451 systemd-resolved[271]: Positive Trust Anchors:
Jul 7 06:13:57.965469 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:13:57.965500 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:13:57.967952 systemd-resolved[271]: Defaulting to hostname 'linux'.
Jul 7 06:13:57.969118 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:57.974987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:58.041138 kernel: SCSI subsystem initialized
Jul 7 06:13:58.050154 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:13:58.061150 kernel: iscsi: registered transport (tcp)
Jul 7 06:13:58.083166 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:13:58.083247 kernel: QLogic iSCSI HBA Driver
Jul 7 06:13:58.104841 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:13:58.133890 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:58.137735 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:13:58.195958 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:13:58.197866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:13:58.257133 kernel: raid6: avx2x4 gen() 29460 MB/s
Jul 7 06:13:58.274133 kernel: raid6: avx2x2 gen() 29686 MB/s
Jul 7 06:13:58.291161 kernel: raid6: avx2x1 gen() 25176 MB/s
Jul 7 06:13:58.291186 kernel: raid6: using algorithm avx2x2 gen() 29686 MB/s
Jul 7 06:13:58.309231 kernel: raid6: .... xor() 19353 MB/s, rmw enabled
Jul 7 06:13:58.309271 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:13:58.332128 kernel: xor: automatically using best checksumming function avx
Jul 7 06:13:58.501150 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:13:58.510633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:13:58.513498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:58.543776 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 7 06:13:58.549606 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:58.551814 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:13:58.576201 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jul 7 06:13:58.606716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:13:58.608449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:13:58.676871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:58.682355 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:13:58.715295 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 06:13:58.720935 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:13:58.723400 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:13:58.723423 kernel: GPT:9289727 != 19775487
Jul 7 06:13:58.723440 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:13:58.723450 kernel: GPT:9289727 != 19775487
Jul 7 06:13:58.724377 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:13:58.724398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:58.730740 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:13:58.738113 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:13:58.753116 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:13:58.753156 kernel: libata version 3.00 loaded.
Jul 7 06:13:58.753912 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:13:58.754089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:58.760350 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:58.767304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:58.787142 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 06:13:58.794544 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 06:13:58.794587 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 06:13:58.794786 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 06:13:58.794931 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 06:13:58.798383 kernel: scsi host0: ahci
Jul 7 06:13:58.798574 kernel: scsi host1: ahci
Jul 7 06:13:58.799129 kernel: scsi host2: ahci
Jul 7 06:13:58.800257 kernel: scsi host3: ahci
Jul 7 06:13:58.800423 kernel: scsi host4: ahci
Jul 7 06:13:58.800571 kernel: scsi host5: ahci
Jul 7 06:13:58.801349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:58.809564 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 7 06:13:58.809622 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 7 06:13:58.809634 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 7 06:13:58.809644 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 7 06:13:58.809667 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 7 06:13:58.809677 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 7 06:13:58.817724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:13:58.826634 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:13:58.840328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:13:58.841571 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:13:58.852783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:13:58.855515 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:13:58.890108 disk-uuid[634]: Primary Header is updated.
Jul 7 06:13:58.890108 disk-uuid[634]: Secondary Entries is updated.
Jul 7 06:13:58.890108 disk-uuid[634]: Secondary Header is updated.
Jul 7 06:13:58.893673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:58.899125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:59.116489 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 06:13:59.116573 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 06:13:59.116590 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 06:13:59.118131 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 06:13:59.119128 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 06:13:59.119153 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 06:13:59.120550 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 06:13:59.120575 kernel: ata3.00: applying bridge limits
Jul 7 06:13:59.121127 kernel: ata3.00: configured for UDMA/100
Jul 7 06:13:59.122123 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 06:13:59.190134 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 06:13:59.190460 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 06:13:59.216507 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 06:13:59.607948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:13:59.610939 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:13:59.613580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:59.616197 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:13:59.619482 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:13:59.654742 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:13:59.901121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:13:59.901479 disk-uuid[635]: The operation has completed successfully.
Jul 7 06:13:59.933045 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:13:59.933195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:13:59.970710 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:14:00.001403 sh[663]: Success
Jul 7 06:14:00.022385 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:14:00.022448 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:14:00.023501 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:14:00.033137 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 06:14:00.065765 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:14:00.068488 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:14:00.085571 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:14:00.091125 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:14:00.093801 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (675)
Jul 7 06:14:00.093823 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:14:00.094720 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:14:00.094741 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:14:00.099853 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:14:00.102050 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:14:00.104261 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:14:00.106930 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:14:00.109433 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:14:00.145048 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708)
Jul 7 06:14:00.145139 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:14:00.145152 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:14:00.146603 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:14:00.153123 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:14:00.154597 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:14:00.157554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:14:00.243211 ignition[749]: Ignition 2.21.0
Jul 7 06:14:00.244079 ignition[749]: Stage: fetch-offline
Jul 7 06:14:00.244163 ignition[749]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:00.244173 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:00.244291 ignition[749]: parsed url from cmdline: ""
Jul 7 06:14:00.244296 ignition[749]: no config URL provided
Jul 7 06:14:00.244305 ignition[749]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:14:00.244314 ignition[749]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:14:00.244342 ignition[749]: op(1): [started] loading QEMU firmware config module
Jul 7 06:14:00.244350 ignition[749]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:14:00.252493 ignition[749]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:14:00.263497 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:14:00.268166 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:14:00.295134 ignition[749]: parsing config with SHA512: ad526570bf53f0ab8944ec0ed2ffd8a9f5c24ad7409af8a91ba3c51d6ca2330b828e4270f43238edd0d94aea37102f091176d4da54f5e376af2ad96d203336e5
Jul 7 06:14:00.298747 unknown[749]: fetched base config from "system"
Jul 7 06:14:00.298758 unknown[749]: fetched user config from "qemu"
Jul 7 06:14:00.299059 ignition[749]: fetch-offline: fetch-offline passed
Jul 7 06:14:00.299122 ignition[749]: Ignition finished successfully
Jul 7 06:14:00.302176 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:14:00.317960 systemd-networkd[853]: lo: Link UP
Jul 7 06:14:00.317970 systemd-networkd[853]: lo: Gained carrier
Jul 7 06:14:00.319525 systemd-networkd[853]: Enumeration completed
Jul 7 06:14:00.319610 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:14:00.319889 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:14:00.319894 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:14:00.321219 systemd[1]: Reached target network.target - Network.
Jul 7 06:14:00.321881 systemd-networkd[853]: eth0: Link UP
Jul 7 06:14:00.321886 systemd-networkd[853]: eth0: Gained carrier
Jul 7 06:14:00.321897 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:14:00.323039 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:14:00.323872 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:14:00.337148 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:14:00.358389 ignition[857]: Ignition 2.21.0
Jul 7 06:14:00.358952 ignition[857]: Stage: kargs
Jul 7 06:14:00.359087 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:00.359113 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:00.359903 ignition[857]: kargs: kargs passed
Jul 7 06:14:00.359952 ignition[857]: Ignition finished successfully
Jul 7 06:14:00.364314 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:14:00.367304 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:14:00.412308 ignition[867]: Ignition 2.21.0
Jul 7 06:14:00.413001 ignition[867]: Stage: disks
Jul 7 06:14:00.414587 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:00.414602 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:00.415650 ignition[867]: disks: disks passed
Jul 7 06:14:00.415701 ignition[867]: Ignition finished successfully
Jul 7 06:14:00.418225 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:14:00.419973 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:14:00.421469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:14:00.423900 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:14:00.426221 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:14:00.427331 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:14:00.430450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:14:00.457420 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:14:00.465460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:14:00.466718 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:14:00.580144 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:14:00.581065 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:14:00.583340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:14:00.586657 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:14:00.589055 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:14:00.591066 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:14:00.591139 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:14:00.592958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:14:00.600755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:14:00.602755 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:14:00.609127 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885)
Jul 7 06:14:00.609166 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:14:00.610122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:14:00.611487 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:14:00.615565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:14:00.642365 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:14:00.647126 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:14:00.651013 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:14:00.655619 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:14:00.744237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:14:00.746365 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:14:00.747964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:14:00.768195 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:14:00.781265 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:14:00.795896 ignition[999]: INFO : Ignition 2.21.0
Jul 7 06:14:00.795896 ignition[999]: INFO : Stage: mount
Jul 7 06:14:00.797573 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:00.797573 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:00.801046 ignition[999]: INFO : mount: mount passed
Jul 7 06:14:00.801046 ignition[999]: INFO : Ignition finished successfully
Jul 7 06:14:00.804020 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:14:00.806393 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:14:01.092352 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:14:01.094638 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:14:01.150721 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Jul 7 06:14:01.150772 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:14:01.150784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:14:01.151648 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:14:01.155844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:14:01.186813 ignition[1029]: INFO : Ignition 2.21.0
Jul 7 06:14:01.186813 ignition[1029]: INFO : Stage: files
Jul 7 06:14:01.188670 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:01.188670 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:01.190865 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:14:01.192085 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:14:01.192085 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:14:01.194939 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:14:01.194939 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:14:01.198157 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:14:01.198157 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:14:01.198157 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 7 06:14:01.194987 unknown[1029]: wrote ssh authorized keys file for user: core
Jul 7 06:14:01.240000 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:14:01.370890 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 7 06:14:01.370890 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:14:01.375603 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 06:14:01.421275 systemd-networkd[853]: eth0: Gained IPv6LL
Jul 7 06:14:01.855886 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 06:14:01.957120 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:14:01.959722 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:14:01.975894 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 7 06:14:02.484263 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:14:02.829223 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 7 06:14:02.829223 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:14:02.833286 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:14:02.835308 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:14:02.835308 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:14:02.835308 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 06:14:02.835308 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:14:02.842283 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:14:02.842283 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 06:14:02.842283 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:14:02.853611 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:14:02.857076 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:14:02.858719 ignition[1029]: INFO : files: files passed
Jul 7 06:14:02.858719 ignition[1029]: INFO : Ignition finished successfully
Jul 7 06:14:02.868392 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:14:02.870863 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:14:02.873350 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:14:02.885676 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:14:02.885799 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:14:02.889598 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:14:02.893131 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:14:02.893131 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:14:02.896271 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:14:02.899252 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:14:02.901949 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:14:02.903232 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:14:02.955824 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:14:02.956963 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:14:02.959511 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:14:02.959776 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:14:02.961759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:14:02.964859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:14:02.997429 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:14:02.999018 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:14:03.024652 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:14:03.026891 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:14:03.027404 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:14:03.029539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:14:03.029659 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:14:03.032848 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:14:03.034827 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:14:03.036574 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:14:03.037121 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:14:03.040335 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:14:03.040876 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:14:03.041554 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:14:03.041874 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:14:03.042378 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:14:03.049557 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:14:03.049866 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:14:03.050327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:14:03.050457 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:14:03.056043 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:14:03.056587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:14:03.056870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:14:03.061862 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:14:03.062498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:14:03.062615 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:14:03.066206 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:14:03.066318 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:14:03.066816 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:14:03.067059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:14:03.074218 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:14:03.076966 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:14:03.077499 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:14:03.077834 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:14:03.077935 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:14:03.080620 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:14:03.080708 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:14:03.082517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:14:03.082656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:14:03.084480 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:14:03.084595 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:14:03.090115 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:14:03.090478 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:14:03.090638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:14:03.093932 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:14:03.095063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:14:03.095236 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:14:03.098054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:14:03.098179 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:14:03.105637 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:14:03.112337 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:14:03.126859 ignition[1084]: INFO : Ignition 2.21.0
Jul 7 06:14:03.126859 ignition[1084]: INFO : Stage: umount
Jul 7 06:14:03.128773 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:14:03.128773 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:14:03.128773 ignition[1084]: INFO : umount: umount passed
Jul 7 06:14:03.131995 ignition[1084]: INFO : Ignition finished successfully
Jul 7 06:14:03.134190 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:14:03.134346 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:14:03.135003 systemd[1]: Stopped target network.target - Network.
Jul 7 06:14:03.138696 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:14:03.138767 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:14:03.139409 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:14:03.139473 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:14:03.139817 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:14:03.139867 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:14:03.140148 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:14:03.140187 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:14:03.140724 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:14:03.141041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:14:03.142496 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:14:03.162582 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:14:03.162728 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:14:03.166892 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:14:03.167269 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:14:03.167401 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:14:03.171178 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:14:03.171886 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:14:03.173511 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:14:03.173571 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:14:03.174868 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:14:03.177481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:14:03.177530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:14:03.177908 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:14:03.177948 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:14:03.183709 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:14:03.183757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:14:03.186170 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:14:03.186220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:14:03.189262 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:14:03.191833 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:14:03.191903 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:14:03.203922 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:14:03.204126 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:14:03.206339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:14:03.206381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:14:03.206649 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:14:03.206682 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:14:03.206963 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:14:03.207007 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:14:03.207773 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:14:03.207819 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:14:03.215304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:14:03.215350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:14:03.222155 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:14:03.222396 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:14:03.222442 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:14:03.226549 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:14:03.226602 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:14:03.229891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:14:03.229937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:14:03.234237 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:14:03.234294 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:14:03.234341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:14:03.234713 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:14:03.244300 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:14:03.252065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:14:03.252252 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:14:03.306216 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:14:03.306363 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:14:03.307069 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:14:03.309372 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:14:03.309439 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:14:03.310624 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:14:03.338189 systemd[1]: Switching root.
Jul 7 06:14:03.377720 systemd-journald[219]: Journal stopped
Jul 7 06:14:04.688067 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:14:04.688186 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:14:04.688205 kernel: SELinux: policy capability open_perms=1
Jul 7 06:14:04.688220 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:14:04.688235 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:14:04.688255 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:14:04.688270 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:14:04.688285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:14:04.688298 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:14:04.688318 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:14:04.688332 kernel: audit: type=1403 audit(1751868843.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:14:04.688354 systemd[1]: Successfully loaded SELinux policy in 43.616ms.
Jul 7 06:14:04.688383 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.288ms.
Jul 7 06:14:04.688400 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:14:04.688420 systemd[1]: Detected virtualization kvm.
Jul 7 06:14:04.688435 systemd[1]: Detected architecture x86-64.
Jul 7 06:14:04.688453 systemd[1]: Detected first boot.
Jul 7 06:14:04.688468 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:14:04.688484 zram_generator::config[1129]: No configuration found.
Jul 7 06:14:04.688501 kernel: Guest personality initialized and is inactive
Jul 7 06:14:04.688527 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:14:04.688543 kernel: Initialized host personality
Jul 7 06:14:04.688561 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:14:04.688577 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:14:04.688593 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:14:04.688608 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:14:04.688623 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:14:04.688639 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:14:04.688655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:14:04.688671 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:14:04.688694 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:14:04.688713 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:14:04.688730 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:14:04.688746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:14:04.688763 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:14:04.688779 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:14:04.688794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:14:04.688813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:14:04.688830 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:14:04.688846 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:14:04.688867 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:14:04.688884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:14:04.688899 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:14:04.688915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:14:04.688931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:14:04.688947 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:14:04.688962 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:14:04.688982 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:14:04.688998 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:14:04.689013 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:14:04.689028 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:14:04.689044 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:14:04.689059 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:14:04.689074 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:14:04.689089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:14:04.689121 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:14:04.689140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:14:04.689155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:14:04.689172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:14:04.689187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:14:04.689202 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:14:04.689217 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:14:04.689231 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:14:04.689246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:04.689261 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:14:04.689286 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:14:04.689302 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:14:04.689318 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:14:04.689334 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:14:04.689349 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:14:04.689364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:14:04.689379 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:14:04.689393 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:14:04.689408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:14:04.689425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:14:04.689440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:14:04.689454 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:14:04.689469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:14:04.689484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:14:04.689501 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:14:04.689528 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:14:04.689543 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:14:04.689561 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:14:04.689577 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:14:04.689593 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:14:04.689607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:14:04.689622 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:14:04.689640 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:14:04.689655 kernel: loop: module loaded
Jul 7 06:14:04.689669 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:14:04.689684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:14:04.689704 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:14:04.689719 systemd[1]: Stopped verity-setup.service.
Jul 7 06:14:04.689733 kernel: fuse: init (API version 7.41)
Jul 7 06:14:04.689748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:04.689763 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:14:04.689781 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:14:04.689797 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:14:04.689812 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:14:04.689831 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:14:04.689848 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:14:04.689866 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:14:04.689881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:14:04.689935 systemd-journald[1200]: Collecting audit messages is disabled.
Jul 7 06:14:04.689972 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:14:04.689988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:14:04.690003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:14:04.690018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:14:04.690036 systemd-journald[1200]: Journal started
Jul 7 06:14:04.690065 systemd-journald[1200]: Runtime Journal (/run/log/journal/848d0bc8e98847a6932340214724e32c) is 6M, max 48.5M, 42.4M free.
Jul 7 06:14:04.409893 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:14:04.429926 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:14:04.430495 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:14:04.692929 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:14:04.693059 kernel: ACPI: bus type drm_connector registered
Jul 7 06:14:04.694087 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:14:04.694379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:14:04.695808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:14:04.696056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:14:04.697565 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:14:04.697777 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:14:04.699215 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:14:04.699465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:14:04.700950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:14:04.702475 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:14:04.704154 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:14:04.705810 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:14:04.720780 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:14:04.723583 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:14:04.725985 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:14:04.727486 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:14:04.727612 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:14:04.729977 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:14:04.734239 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:14:04.735647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:14:04.737197 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:14:04.741454 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:14:04.743035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:14:04.744284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:14:04.745595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:14:04.747140 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:14:04.752841 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:14:04.755422 systemd-journald[1200]: Time spent on flushing to /var/log/journal/848d0bc8e98847a6932340214724e32c is 23.509ms for 1064 entries.
Jul 7 06:14:04.755422 systemd-journald[1200]: System Journal (/var/log/journal/848d0bc8e98847a6932340214724e32c) is 8M, max 195.6M, 187.6M free.
Jul 7 06:14:04.801883 systemd-journald[1200]: Received client request to flush runtime journal.
Jul 7 06:14:04.801937 kernel: loop0: detected capacity change from 0 to 146240
Jul 7 06:14:04.756482 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:14:04.760568 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:14:04.762211 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:14:04.775165 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:14:04.777757 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:14:04.784140 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:14:04.786486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:14:04.804851 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:14:04.810471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:14:04.815117 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:14:04.825641 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:14:04.828783 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:14:04.833090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:14:04.838113 kernel: loop1: detected capacity change from 0 to 229808
Jul 7 06:14:04.862132 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:14:04.863851 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jul 7 06:14:04.863870 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jul 7 06:14:04.869439 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:14:04.892128 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:14:04.902132 kernel: loop4: detected capacity change from 0 to 229808
Jul 7 06:14:04.912135 kernel: loop5: detected capacity change from 0 to 113872
Jul 7 06:14:04.921086 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:14:04.921704 (sd-merge)[1270]: Merged extensions into '/usr'.
Jul 7 06:14:04.927078 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:14:04.927194 systemd[1]: Reloading...
Jul 7 06:14:04.980130 zram_generator::config[1296]: No configuration found.
Jul 7 06:14:05.082562 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:14:05.095114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:14:05.177411 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:14:05.177927 systemd[1]: Reloading finished in 250 ms.
Jul 7 06:14:05.206384 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:14:05.207994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:14:05.223661 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:14:05.225937 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:14:05.238032 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:14:05.238134 systemd[1]: Reloading...
Jul 7 06:14:05.250394 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:14:05.250825 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:14:05.251179 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:14:05.251502 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:14:05.252563 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:14:05.252894 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 7 06:14:05.252976 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 7 06:14:05.257936 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:14:05.257950 systemd-tmpfiles[1335]: Skipping /boot
Jul 7 06:14:05.275052 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:14:05.275239 systemd-tmpfiles[1335]: Skipping /boot
Jul 7 06:14:05.307139 zram_generator::config[1365]: No configuration found.
Jul 7 06:14:05.400244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:14:05.486005 systemd[1]: Reloading finished in 247 ms.
Jul 7 06:14:05.510863 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:14:05.527819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:14:05.537355 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:14:05.540170 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:14:05.562161 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:14:05.565962 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:14:05.570312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:14:05.573917 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:14:05.578914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:05.579195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:14:05.584516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:14:05.588283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:14:05.590677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:14:05.591884 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:14:05.591983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:14:05.593837 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:14:05.594911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:05.601367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:14:05.601618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:14:05.604631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:14:05.607313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:14:05.607658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:14:05.610240 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:14:05.610858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:14:05.615455 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:14:05.621452 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Jul 7 06:14:05.625284 augenrules[1435]: No rules
Jul 7 06:14:05.625494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:05.625793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:14:05.627539 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:14:05.631299 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:14:05.642330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:14:05.644646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:14:05.645838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:14:05.645992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:14:05.649256 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:14:05.649772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:14:05.651660 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:14:05.652551 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:14:05.654008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:14:05.656718 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:14:05.659123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:14:05.659378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:14:05.661203 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:14:05.662942 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:14:05.664247 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:14:05.665916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:14:05.666240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:14:05.668744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:14:05.669005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:14:05.677467 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:14:05.679809 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:14:05.704525 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:14:05.707188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:14:05.707275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:14:05.709708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:14:05.710918 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:14:05.764173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:14:05.798007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:14:05.801065 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:14:05.808558 systemd-resolved[1404]: Positive Trust Anchors:
Jul 7 06:14:05.808577 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:14:05.808610 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:14:05.812123 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:14:05.812973 systemd-resolved[1404]: Defaulting to hostname 'linux'.
Jul 7 06:14:05.814769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:14:05.816148 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:14:05.823306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:14:05.826240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:14:05.832117 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:14:05.862067 systemd-networkd[1486]: lo: Link UP
Jul 7 06:14:05.862406 systemd-networkd[1486]: lo: Gained carrier
Jul 7 06:14:05.869360 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 7 06:14:05.869665 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:14:05.869832 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:14:05.865814 systemd-networkd[1486]: Enumeration completed
Jul 7 06:14:05.865919 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:14:05.866981 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:14:05.866986 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:14:05.867237 systemd[1]: Reached target network.target - Network.
Jul 7 06:14:05.867626 systemd-networkd[1486]: eth0: Link UP
Jul 7 06:14:05.868786 systemd-networkd[1486]: eth0: Gained carrier
Jul 7 06:14:05.868803 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:14:05.870597 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:14:05.874192 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:14:05.883154 systemd-networkd[1486]: eth0: DHCPv4 address 10.0.0.129/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:14:05.900156 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:14:05.904173 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:14:05.905774 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:14:05.906373 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:14:05.906417 systemd-timesyncd[1487]: Initial clock synchronization to Mon 2025-07-07 06:14:06.145640 UTC.
Jul 7 06:14:05.907323 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:14:05.909203 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:14:05.910419 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:14:05.911813 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:14:05.913187 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:14:05.913222 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:14:05.914167 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:14:05.915324 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:14:05.916501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:14:05.918175 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:14:05.920852 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:14:05.924762 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:14:05.929752 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:14:05.933319 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:14:05.934561 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:14:05.946208 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:14:05.947869 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:14:05.950641 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:14:05.970573 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:14:05.972252 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:14:05.973327 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:14:05.973412 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:14:05.976322 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:14:05.978441 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:14:05.984771 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:14:05.990288 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:14:05.993076 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:14:05.994128 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:14:05.995335 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:14:05.999429 jq[1525]: false
Jul 7 06:14:06.002111 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:14:06.006832 oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 7 06:14:06.012548 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache
Jul 7 06:14:06.012461 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:14:06.014353 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 7 06:14:06.014345 oslogin_cache_refresh[1527]: Failure getting users, quitting
Jul 7 06:14:06.014550 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:14:06.014550 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 7 06:14:06.014364 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:14:06.014412 oslogin_cache_refresh[1527]: Refreshing group entry cache
Jul 7 06:14:06.017439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:14:06.020266 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:14:06.021475 oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 7 06:14:06.023290 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting
Jul 7 06:14:06.023290 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:14:06.021486 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:14:06.035475 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:14:06.037834 extend-filesystems[1526]: Found /dev/vda6
Jul 7 06:14:06.039167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:14:06.041107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:14:06.041647 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:14:06.043695 extend-filesystems[1526]: Found /dev/vda9
Jul 7 06:14:06.045085 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:14:06.050796 extend-filesystems[1526]: Checking size of /dev/vda9
Jul 7 06:14:06.083065 extend-filesystems[1526]: Resized partition /dev/vda9
Jul 7 06:14:06.085429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:14:06.096656 jq[1555]: true
Jul 7 06:14:06.096451 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:14:06.098886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:14:06.099270 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:14:06.100935 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:14:06.101468 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:14:06.102832 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:14:06.104586 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:14:06.104919 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:14:06.118789 update_engine[1547]: I20250707 06:14:06.118721 1547 main.cc:92] Flatcar Update Engine starting
Jul 7 06:14:06.119903 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:14:06.120665 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:14:06.136229 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 06:14:06.138883 (ntainerd)[1562]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:14:06.151793 jq[1561]: true
Jul 7 06:14:06.168150 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 06:14:06.193234 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:14:06.193234 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:14:06.193234 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 06:14:06.194739 extend-filesystems[1526]: Resized filesystem in /dev/vda9
Jul 7 06:14:06.194795 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:14:06.195183 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:14:06.195914 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:14:06.196231 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:14:06.196674 systemd-logind[1538]: New seat seat0.
Jul 7 06:14:06.204555 kernel: kvm_amd: TSC scaling supported
Jul 7 06:14:06.204611 kernel: kvm_amd: Nested Virtualization enabled
Jul 7 06:14:06.204625 kernel: kvm_amd: Nested Paging enabled
Jul 7 06:14:06.205722 kernel: kvm_amd: LBR virtualization supported
Jul 7 06:14:06.208160 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 7 06:14:06.208265 kernel: kvm_amd: Virtual GIF supported
Jul 7 06:14:06.234397 dbus-daemon[1523]: [system] SELinux support is enabled
Jul 7 06:14:06.239694 update_engine[1547]: I20250707 06:14:06.239557 1547 update_check_scheduler.cc:74] Next update check in 3m54s
Jul 7 06:14:06.249523 bash[1591]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:14:06.260523 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:14:06.262030 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:14:06.265914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:14:06.270110 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:14:06.268821 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:14:06.276290 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:14:06.278019 tar[1558]: linux-amd64/LICENSE
Jul 7 06:14:06.278019 tar[1558]: linux-amd64/helm
Jul 7 06:14:06.277193 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:14:06.278358 dbus-daemon[1523]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 06:14:06.277226 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:14:06.278783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:14:06.278802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:14:06.280416 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:14:06.286185 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:14:06.325241 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:14:06.395841 containerd[1562]: time="2025-07-07T06:14:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:14:06.399847 containerd[1562]: time="2025-07-07T06:14:06.399807381Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:14:06.409686 containerd[1562]: time="2025-07-07T06:14:06.409584067Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.039µs"
Jul 7 06:14:06.409686 containerd[1562]: time="2025-07-07T06:14:06.409614323Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:14:06.409686 containerd[1562]: time="2025-07-07T06:14:06.409631392Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:14:06.410174 containerd[1562]: time="2025-07-07T06:14:06.410043271Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:14:06.410174 containerd[1562]: time="2025-07-07T06:14:06.410068255Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:14:06.410174 containerd[1562]: time="2025-07-07T06:14:06.410117622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:14:06.410387 containerd[1562]: time="2025-07-07T06:14:06.410360335Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:14:06.410650 containerd[1562]: time="2025-07-07T06:14:06.410471310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:14:06.411214 containerd[1562]: time="2025-07-07T06:14:06.411189491Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:14:06.411286 containerd[1562]: time="2025-07-07T06:14:06.411267950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:14:06.411354 containerd[1562]: time="2025-07-07T06:14:06.411335562Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:14:06.411462 containerd[1562]: time="2025-07-07T06:14:06.411425857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:14:06.411661 containerd[1562]: time="2025-07-07T06:14:06.411640676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:14:06.412022 containerd[1562]: time="2025-07-07T06:14:06.412000185Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:14:06.412119 containerd[1562]: time="2025-07-07T06:14:06.412098808Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:14:06.412206 containerd[1562]: time="2025-07-07T06:14:06.412187287Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:14:06.412307 containerd[1562]: time="2025-07-07T06:14:06.412288685Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:14:06.412882 containerd[1562]: time="2025-07-07T06:14:06.412765051Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:14:06.412882 containerd[1562]: time="2025-07-07T06:14:06.412849504Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:14:06.418237 containerd[1562]: time="2025-07-07T06:14:06.418213067Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:14:06.418335 containerd[1562]: time="2025-07-07T06:14:06.418316788Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:14:06.418409 containerd[1562]: time="2025-07-07T06:14:06.418391191Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:14:06.418488 containerd[1562]: time="2025-07-07T06:14:06.418470506Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:14:06.418559 containerd[1562]: time="2025-07-07T06:14:06.418541948Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:14:06.418623 containerd[1562]: time="2025-07-07T06:14:06.418606826Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:14:06.418708 containerd[1562]: time="2025-07-07T06:14:06.418690857Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:14:06.418795 containerd[1562]: time="2025-07-07T06:14:06.418775156Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:14:06.418872 containerd[1562]: time="2025-07-07T06:14:06.418855060Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:14:06.418941 containerd[1562]: time="2025-07-07T06:14:06.418924138Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:14:06.419007 containerd[1562]: time="2025-07-07T06:14:06.418989903Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:14:06.419081 containerd[1562]: time="2025-07-07T06:14:06.419063914Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:14:06.419285 containerd[1562]: time="2025-07-07T06:14:06.419264359Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:14:06.419379 containerd[1562]: time="2025-07-07T06:14:06.419358658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:14:06.419455 containerd[1562]: time="2025-07-07T06:14:06.419437127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:14:06.419523 containerd[1562]: time="2025-07-07T06:14:06.419506009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:14:06.419605 containerd[1562]: time="2025-07-07T06:14:06.419585870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419662926Z" level=info
msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419702315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419718124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419731994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419745687Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419757152Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419835776Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419853319Z" level=info msg="Start snapshots syncer" Jul 7 06:14:06.420344 containerd[1562]: time="2025-07-07T06:14:06.419887869Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:14:06.420568 containerd[1562]: time="2025-07-07T06:14:06.420178980Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:14:06.420568 containerd[1562]: time="2025-07-07T06:14:06.420238213Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:14:06.421203 containerd[1562]: time="2025-07-07T06:14:06.421178179Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:14:06.421415 containerd[1562]: time="2025-07-07T06:14:06.421381750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:14:06.421446 containerd[1562]: time="2025-07-07T06:14:06.421413359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:14:06.421446 containerd[1562]: time="2025-07-07T06:14:06.421426712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:14:06.421446 containerd[1562]: time="2025-07-07T06:14:06.421439684Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:14:06.421500 containerd[1562]: time="2025-07-07T06:14:06.421454647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:14:06.421500 containerd[1562]: time="2025-07-07T06:14:06.421467670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:14:06.421500 containerd[1562]: time="2025-07-07T06:14:06.421481488Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:14:06.421559 containerd[1562]: time="2025-07-07T06:14:06.421513467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:14:06.421559 containerd[1562]: time="2025-07-07T06:14:06.421527904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:14:06.421559 containerd[1562]: time="2025-07-07T06:14:06.421542651Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:14:06.422370 containerd[1562]: time="2025-07-07T06:14:06.422344532Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:14:06.422399 containerd[1562]: time="2025-07-07T06:14:06.422372962Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:14:06.422399 containerd[1562]: time="2025-07-07T06:14:06.422385325Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:14:06.422399 containerd[1562]: time="2025-07-07T06:14:06.422397595Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:14:06.422473 containerd[1562]: time="2025-07-07T06:14:06.422409482Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:14:06.422473 containerd[1562]: time="2025-07-07T06:14:06.422426788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:14:06.422473 containerd[1562]: time="2025-07-07T06:14:06.422439729Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:14:06.422473 containerd[1562]: time="2025-07-07T06:14:06.422461616Z" level=info msg="runtime interface created" Jul 7 06:14:06.422473 containerd[1562]: time="2025-07-07T06:14:06.422468974Z" level=info msg="created NRI interface" Jul 7 06:14:06.422564 containerd[1562]: time="2025-07-07T06:14:06.422478973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:14:06.422564 containerd[1562]: time="2025-07-07T06:14:06.422491543Z" level=info msg="Connect containerd service" Jul 7 06:14:06.422564 containerd[1562]: time="2025-07-07T06:14:06.422519891Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:14:06.423467 containerd[1562]: 
time="2025-07-07T06:14:06.423436296Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:14:06.451930 sshd_keygen[1549]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:14:06.481775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:14:06.484983 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:14:06.506433 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:14:06.506894 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:14:06.510896 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:14:06.521973 containerd[1562]: time="2025-07-07T06:14:06.521926586Z" level=info msg="Start subscribing containerd event" Jul 7 06:14:06.522048 containerd[1562]: time="2025-07-07T06:14:06.521984025Z" level=info msg="Start recovering state" Jul 7 06:14:06.522108 containerd[1562]: time="2025-07-07T06:14:06.522085940Z" level=info msg="Start event monitor" Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522121738Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522130592Z" level=info msg="Start streaming server" Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522140364Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522147753Z" level=info msg="runtime interface starting up..." Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522153460Z" level=info msg="starting plugins..." 
Jul 7 06:14:06.522186 containerd[1562]: time="2025-07-07T06:14:06.522169258Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:14:06.522331 containerd[1562]: time="2025-07-07T06:14:06.522094112Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:14:06.522331 containerd[1562]: time="2025-07-07T06:14:06.522321284Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:14:06.523186 containerd[1562]: time="2025-07-07T06:14:06.522369951Z" level=info msg="containerd successfully booted in 0.127113s" Jul 7 06:14:06.522438 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:14:06.530716 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:14:06.534441 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:14:06.537076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:14:06.538871 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:14:06.702326 tar[1558]: linux-amd64/README.md Jul 7 06:14:06.725278 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:14:07.501683 systemd-networkd[1486]: eth0: Gained IPv6LL Jul 7 06:14:07.505236 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:14:07.507128 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:14:07.509757 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:14:07.512307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:07.526450 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:14:07.549409 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:14:07.551036 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:14:07.551320 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 7 06:14:07.553498 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:14:08.683757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:08.685401 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:14:08.686660 systemd[1]: Startup finished in 2.953s (kernel) + 6.227s (initrd) + 4.868s (userspace) = 14.048s. Jul 7 06:14:08.689882 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:14:09.278433 kubelet[1667]: E0707 06:14:09.278380 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:14:09.282818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:14:09.283029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:14:09.283433 systemd[1]: kubelet.service: Consumed 1.554s CPU time, 266.4M memory peak. Jul 7 06:14:11.165980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:14:11.167288 systemd[1]: Started sshd@0-10.0.0.129:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). Jul 7 06:14:11.236921 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:11.238765 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:11.245698 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:14:11.246836 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 7 06:14:11.253529 systemd-logind[1538]: New session 1 of user core. Jul 7 06:14:11.276716 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:14:11.280128 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:14:11.296995 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:14:11.299577 systemd-logind[1538]: New session c1 of user core. Jul 7 06:14:11.454138 systemd[1684]: Queued start job for default target default.target. Jul 7 06:14:11.463374 systemd[1684]: Created slice app.slice - User Application Slice. Jul 7 06:14:11.463401 systemd[1684]: Reached target paths.target - Paths. Jul 7 06:14:11.463445 systemd[1684]: Reached target timers.target - Timers. Jul 7 06:14:11.464972 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:14:11.476352 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:14:11.476472 systemd[1684]: Reached target sockets.target - Sockets. Jul 7 06:14:11.476506 systemd[1684]: Reached target basic.target - Basic System. Jul 7 06:14:11.476546 systemd[1684]: Reached target default.target - Main User Target. Jul 7 06:14:11.476576 systemd[1684]: Startup finished in 169ms. Jul 7 06:14:11.476984 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:14:11.478834 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:14:11.550584 systemd[1]: Started sshd@1-10.0.0.129:22-10.0.0.1:60342.service - OpenSSH per-connection server daemon (10.0.0.1:60342). Jul 7 06:14:11.596132 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 60342 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:11.597490 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:11.601703 systemd-logind[1538]: New session 2 of user core. 
Jul 7 06:14:11.611305 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:14:11.664145 sshd[1697]: Connection closed by 10.0.0.1 port 60342 Jul 7 06:14:11.664487 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:11.672486 systemd[1]: sshd@1-10.0.0.129:22-10.0.0.1:60342.service: Deactivated successfully. Jul 7 06:14:11.674020 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:14:11.674878 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:14:11.677232 systemd[1]: Started sshd@2-10.0.0.129:22-10.0.0.1:60352.service - OpenSSH per-connection server daemon (10.0.0.1:60352). Jul 7 06:14:11.678005 systemd-logind[1538]: Removed session 2. Jul 7 06:14:11.731306 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 60352 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:11.732421 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:11.737044 systemd-logind[1538]: New session 3 of user core. Jul 7 06:14:11.747323 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:14:11.797317 sshd[1705]: Connection closed by 10.0.0.1 port 60352 Jul 7 06:14:11.797702 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:11.809785 systemd[1]: sshd@2-10.0.0.129:22-10.0.0.1:60352.service: Deactivated successfully. Jul 7 06:14:11.811429 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:14:11.812130 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:14:11.814611 systemd[1]: Started sshd@3-10.0.0.129:22-10.0.0.1:60362.service - OpenSSH per-connection server daemon (10.0.0.1:60362). Jul 7 06:14:11.815227 systemd-logind[1538]: Removed session 3. 
Jul 7 06:14:11.861854 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 60362 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:11.863251 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:11.867564 systemd-logind[1538]: New session 4 of user core. Jul 7 06:14:11.877249 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:14:11.929454 sshd[1713]: Connection closed by 10.0.0.1 port 60362 Jul 7 06:14:11.929728 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:11.941479 systemd[1]: sshd@3-10.0.0.129:22-10.0.0.1:60362.service: Deactivated successfully. Jul 7 06:14:11.943032 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:14:11.943964 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:14:11.946918 systemd[1]: Started sshd@4-10.0.0.129:22-10.0.0.1:60366.service - OpenSSH per-connection server daemon (10.0.0.1:60366). Jul 7 06:14:11.947635 systemd-logind[1538]: Removed session 4. Jul 7 06:14:11.999293 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 60366 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:12.000875 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:12.005038 systemd-logind[1538]: New session 5 of user core. Jul 7 06:14:12.014242 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 06:14:12.073922 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:14:12.074287 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:12.095311 sudo[1722]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:12.097279 sshd[1721]: Connection closed by 10.0.0.1 port 60366 Jul 7 06:14:12.097652 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:12.113957 systemd[1]: sshd@4-10.0.0.129:22-10.0.0.1:60366.service: Deactivated successfully. Jul 7 06:14:12.115763 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:14:12.116605 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:14:12.119352 systemd[1]: Started sshd@5-10.0.0.129:22-10.0.0.1:60372.service - OpenSSH per-connection server daemon (10.0.0.1:60372). Jul 7 06:14:12.119864 systemd-logind[1538]: Removed session 5. Jul 7 06:14:12.178609 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 60372 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:12.180189 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:12.184192 systemd-logind[1538]: New session 6 of user core. Jul 7 06:14:12.194227 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 06:14:12.248276 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:14:12.248592 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:12.393634 sudo[1732]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:12.400944 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:14:12.401332 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:12.411857 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:14:12.462814 augenrules[1754]: No rules Jul 7 06:14:12.463879 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:14:12.464275 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:14:12.465672 sudo[1731]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:12.467193 sshd[1730]: Connection closed by 10.0.0.1 port 60372 Jul 7 06:14:12.467550 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:12.475755 systemd[1]: sshd@5-10.0.0.129:22-10.0.0.1:60372.service: Deactivated successfully. Jul 7 06:14:12.477362 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:14:12.478128 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:14:12.480745 systemd[1]: Started sshd@6-10.0.0.129:22-10.0.0.1:60374.service - OpenSSH per-connection server daemon (10.0.0.1:60374). Jul 7 06:14:12.481382 systemd-logind[1538]: Removed session 6. Jul 7 06:14:12.527492 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 60374 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:14:12.529203 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:12.534315 systemd-logind[1538]: New session 7 of user core. 
Jul 7 06:14:12.552272 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:14:12.607083 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:14:12.607480 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:12.914145 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:14:12.936466 (dockerd)[1786]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:14:13.159772 dockerd[1786]: time="2025-07-07T06:14:13.159706864Z" level=info msg="Starting up" Jul 7 06:14:13.160549 dockerd[1786]: time="2025-07-07T06:14:13.160524127Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:14:13.495789 dockerd[1786]: time="2025-07-07T06:14:13.495699299Z" level=info msg="Loading containers: start." Jul 7 06:14:13.508152 kernel: Initializing XFRM netlink socket Jul 7 06:14:13.779182 systemd-networkd[1486]: docker0: Link UP Jul 7 06:14:13.784211 dockerd[1786]: time="2025-07-07T06:14:13.784157571Z" level=info msg="Loading containers: done." Jul 7 06:14:13.797902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3267235702-merged.mount: Deactivated successfully. 
Jul 7 06:14:13.799364 dockerd[1786]: time="2025-07-07T06:14:13.799286188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:14:13.799468 dockerd[1786]: time="2025-07-07T06:14:13.799408732Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:14:13.799607 dockerd[1786]: time="2025-07-07T06:14:13.799573122Z" level=info msg="Initializing buildkit" Jul 7 06:14:13.829652 dockerd[1786]: time="2025-07-07T06:14:13.829576244Z" level=info msg="Completed buildkit initialization" Jul 7 06:14:13.835289 dockerd[1786]: time="2025-07-07T06:14:13.835231622Z" level=info msg="Daemon has completed initialization" Jul 7 06:14:13.835455 dockerd[1786]: time="2025-07-07T06:14:13.835315757Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:14:13.835533 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:14:14.373771 containerd[1562]: time="2025-07-07T06:14:14.373734172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 06:14:14.975424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778120992.mount: Deactivated successfully. 
Jul 7 06:14:15.904665 containerd[1562]: time="2025-07-07T06:14:15.904598649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:15.905638 containerd[1562]: time="2025-07-07T06:14:15.905532818Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 06:14:15.907272 containerd[1562]: time="2025-07-07T06:14:15.907226600Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:15.910408 containerd[1562]: time="2025-07-07T06:14:15.910382432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:15.911446 containerd[1562]: time="2025-07-07T06:14:15.911399062Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.537624061s" Jul 7 06:14:15.911446 containerd[1562]: time="2025-07-07T06:14:15.911444533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 06:14:15.912065 containerd[1562]: time="2025-07-07T06:14:15.912040399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 06:14:17.313605 containerd[1562]: time="2025-07-07T06:14:17.313520740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:17.314184 containerd[1562]: time="2025-07-07T06:14:17.314141978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 06:14:17.315314 containerd[1562]: time="2025-07-07T06:14:17.315276020Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:17.318035 containerd[1562]: time="2025-07-07T06:14:17.317990296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:17.318813 containerd[1562]: time="2025-07-07T06:14:17.318781154Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.406708044s" Jul 7 06:14:17.318858 containerd[1562]: time="2025-07-07T06:14:17.318817450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 06:14:17.319396 containerd[1562]: time="2025-07-07T06:14:17.319322082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 06:14:18.746542 containerd[1562]: time="2025-07-07T06:14:18.746471731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:18.747622 containerd[1562]: time="2025-07-07T06:14:18.747563968Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 06:14:18.748994 containerd[1562]: time="2025-07-07T06:14:18.748936321Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:18.751346 containerd[1562]: time="2025-07-07T06:14:18.751313070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:18.752313 containerd[1562]: time="2025-07-07T06:14:18.752276162Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.432884539s" Jul 7 06:14:18.752313 containerd[1562]: time="2025-07-07T06:14:18.752303971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 06:14:18.752768 containerd[1562]: time="2025-07-07T06:14:18.752731597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 06:14:19.529895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:14:19.532028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:19.740178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:14:19.749517 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:14:19.792798 kubelet[2073]: E0707 06:14:19.792659 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:14:19.799910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:14:19.800125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:14:19.800505 systemd[1]: kubelet.service: Consumed 226ms CPU time, 108M memory peak. Jul 7 06:14:19.852698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575435260.mount: Deactivated successfully. Jul 7 06:14:20.684116 containerd[1562]: time="2025-07-07T06:14:20.684036709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:20.684871 containerd[1562]: time="2025-07-07T06:14:20.684838016Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 06:14:20.686120 containerd[1562]: time="2025-07-07T06:14:20.686046247Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:20.687997 containerd[1562]: time="2025-07-07T06:14:20.687954912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:20.688531 containerd[1562]: time="2025-07-07T06:14:20.688479295Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.935655827s" Jul 7 06:14:20.688567 containerd[1562]: time="2025-07-07T06:14:20.688529259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 06:14:20.689088 containerd[1562]: time="2025-07-07T06:14:20.689047443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 06:14:21.287023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223614687.mount: Deactivated successfully. Jul 7 06:14:22.380042 containerd[1562]: time="2025-07-07T06:14:22.379979554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:22.380831 containerd[1562]: time="2025-07-07T06:14:22.380766653Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 06:14:22.381878 containerd[1562]: time="2025-07-07T06:14:22.381825389Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:22.385076 containerd[1562]: time="2025-07-07T06:14:22.385038356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:22.385975 containerd[1562]: time="2025-07-07T06:14:22.385941393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.696855448s" Jul 7 06:14:22.385975 containerd[1562]: time="2025-07-07T06:14:22.385969564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 06:14:22.386397 containerd[1562]: time="2025-07-07T06:14:22.386373897Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:14:22.899015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561390863.mount: Deactivated successfully. Jul 7 06:14:22.906338 containerd[1562]: time="2025-07-07T06:14:22.906305331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:22.907083 containerd[1562]: time="2025-07-07T06:14:22.907059352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:14:22.908265 containerd[1562]: time="2025-07-07T06:14:22.908246109Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:22.910223 containerd[1562]: time="2025-07-07T06:14:22.910178583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:22.910866 containerd[1562]: time="2025-07-07T06:14:22.910834846Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 524.435109ms" Jul 7 06:14:22.910903 containerd[1562]: time="2025-07-07T06:14:22.910865291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:14:22.911322 containerd[1562]: time="2025-07-07T06:14:22.911294578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 06:14:23.395539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519020366.mount: Deactivated successfully. Jul 7 06:14:25.576235 containerd[1562]: time="2025-07-07T06:14:25.576161276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:25.577033 containerd[1562]: time="2025-07-07T06:14:25.576984515Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 06:14:25.578468 containerd[1562]: time="2025-07-07T06:14:25.578432885Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:25.581294 containerd[1562]: time="2025-07-07T06:14:25.581267399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:25.582240 containerd[1562]: time="2025-07-07T06:14:25.582181143Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.67085333s" Jul 7 06:14:25.582240 containerd[1562]: time="2025-07-07T06:14:25.582225963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 06:14:28.815355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:28.815574 systemd[1]: kubelet.service: Consumed 226ms CPU time, 108M memory peak. Jul 7 06:14:28.818287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:28.842083 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)... Jul 7 06:14:28.842118 systemd[1]: Reloading... Jul 7 06:14:28.922278 zram_generator::config[2272]: No configuration found. Jul 7 06:14:29.107883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:14:29.227969 systemd[1]: Reloading finished in 385 ms. Jul 7 06:14:29.296999 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:14:29.297150 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:14:29.297500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:29.297551 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.3M memory peak. Jul 7 06:14:29.299453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:29.506137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:14:29.516424 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:14:29.551029 kubelet[2317]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:29.551029 kubelet[2317]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:14:29.551029 kubelet[2317]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:29.551499 kubelet[2317]: I0707 06:14:29.551063 2317 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:14:30.081372 kubelet[2317]: I0707 06:14:30.081322 2317 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:14:30.081372 kubelet[2317]: I0707 06:14:30.081351 2317 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:14:30.081583 kubelet[2317]: I0707 06:14:30.081568 2317 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:14:30.109296 kubelet[2317]: E0707 06:14:30.109242 2317 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:14:30.109472 kubelet[2317]: I0707 06:14:30.109441 2317 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:14:30.115354 kubelet[2317]: I0707 06:14:30.115331 2317 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:14:30.121681 kubelet[2317]: I0707 06:14:30.121653 2317 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:14:30.121926 kubelet[2317]: I0707 06:14:30.121889 2317 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:14:30.122085 kubelet[2317]: I0707 06:14:30.121914 2317 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerR
econcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:30.122085 kubelet[2317]: I0707 06:14:30.122085 2317 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:14:30.122221 kubelet[2317]: I0707 06:14:30.122112 2317 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:14:30.122995 kubelet[2317]: I0707 06:14:30.122965 2317 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:30.125049 kubelet[2317]: I0707 06:14:30.125025 2317 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:14:30.125049 kubelet[2317]: I0707 06:14:30.125046 2317 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:30.125130 kubelet[2317]: I0707 06:14:30.125070 2317 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:14:30.127033 kubelet[2317]: I0707 06:14:30.127015 2317 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:30.133003 kubelet[2317]: E0707 06:14:30.132120 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:14:30.133003 kubelet[2317]: E0707 06:14:30.132429 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:14:30.134404 kubelet[2317]: I0707 06:14:30.134373 2317 
kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:14:30.134938 kubelet[2317]: I0707 06:14:30.134912 2317 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:14:30.135483 kubelet[2317]: W0707 06:14:30.135466 2317 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:14:30.138004 kubelet[2317]: I0707 06:14:30.137987 2317 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:14:30.138065 kubelet[2317]: I0707 06:14:30.138038 2317 server.go:1289] "Started kubelet" Jul 7 06:14:30.139366 kubelet[2317]: I0707 06:14:30.139307 2317 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:30.140397 kubelet[2317]: I0707 06:14:30.140285 2317 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:14:30.140916 kubelet[2317]: I0707 06:14:30.140465 2317 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:30.140916 kubelet[2317]: I0707 06:14:30.140832 2317 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:30.142137 kubelet[2317]: I0707 06:14:30.142114 2317 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:14:30.142532 kubelet[2317]: E0707 06:14:30.142496 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.142578 kubelet[2317]: I0707 06:14:30.142537 2317 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:14:30.142619 kubelet[2317]: I0707 06:14:30.142519 2317 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 
06:14:30.142704 kubelet[2317]: I0707 06:14:30.142683 2317 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:14:30.142754 kubelet[2317]: I0707 06:14:30.142735 2317 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:30.143051 kubelet[2317]: E0707 06:14:30.143024 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:14:30.143468 kubelet[2317]: E0707 06:14:30.143426 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="200ms" Jul 7 06:14:30.143876 kubelet[2317]: E0707 06:14:30.142364 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe374a1a40dbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:14:30.138006973 +0000 UTC m=+0.617556483,LastTimestamp:2025-07-07 06:14:30.138006973 +0000 UTC m=+0.617556483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:14:30.143998 kubelet[2317]: I0707 06:14:30.143970 2317 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:14:30.144059 
kubelet[2317]: I0707 06:14:30.144036 2317 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:30.144288 kubelet[2317]: E0707 06:14:30.144262 2317 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:14:30.145314 kubelet[2317]: I0707 06:14:30.145291 2317 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:14:30.161045 kubelet[2317]: I0707 06:14:30.160982 2317 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:30.161549 kubelet[2317]: I0707 06:14:30.161505 2317 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:14:30.161717 kubelet[2317]: I0707 06:14:30.161646 2317 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:14:30.161717 kubelet[2317]: I0707 06:14:30.161668 2317 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:30.162617 kubelet[2317]: I0707 06:14:30.162586 2317 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:14:30.162617 kubelet[2317]: I0707 06:14:30.162613 2317 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:14:30.162831 kubelet[2317]: I0707 06:14:30.162636 2317 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:14:30.162831 kubelet[2317]: I0707 06:14:30.162644 2317 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:14:30.162831 kubelet[2317]: E0707 06:14:30.162694 2317 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:14:30.163441 kubelet[2317]: E0707 06:14:30.163406 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:14:30.243680 kubelet[2317]: E0707 06:14:30.243623 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.263876 kubelet[2317]: E0707 06:14:30.263824 2317 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:14:30.344600 kubelet[2317]: E0707 06:14:30.344463 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.344896 kubelet[2317]: E0707 06:14:30.344851 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="400ms" Jul 7 06:14:30.393511 kubelet[2317]: E0707 06:14:30.393382 2317 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.129:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.129:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe374a1a40dbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:14:30.138006973 +0000 UTC m=+0.617556483,LastTimestamp:2025-07-07 06:14:30.138006973 +0000 UTC m=+0.617556483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:14:30.444784 kubelet[2317]: E0707 06:14:30.444708 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.464025 kubelet[2317]: E0707 06:14:30.463964 2317 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:14:30.545575 kubelet[2317]: E0707 06:14:30.545535 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.646664 kubelet[2317]: E0707 06:14:30.646599 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.745718 kubelet[2317]: E0707 06:14:30.745661 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="800ms" Jul 7 06:14:30.747761 kubelet[2317]: E0707 06:14:30.747716 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.848538 kubelet[2317]: E0707 06:14:30.848438 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.864738 kubelet[2317]: E0707 06:14:30.864647 2317 kubelet.go:2460] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Jul 7 06:14:30.922989 kubelet[2317]: I0707 06:14:30.922806 2317 policy_none.go:49] "None policy: Start" Jul 7 06:14:30.922989 kubelet[2317]: I0707 06:14:30.922840 2317 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:14:30.922989 kubelet[2317]: I0707 06:14:30.922857 2317 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:14:30.930862 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:14:30.943213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:14:30.946313 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:14:30.949084 kubelet[2317]: E0707 06:14:30.949064 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:30.953950 kubelet[2317]: E0707 06:14:30.953916 2317 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:14:30.954158 kubelet[2317]: I0707 06:14:30.954136 2317 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:14:30.954216 kubelet[2317]: I0707 06:14:30.954156 2317 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:14:30.954481 kubelet[2317]: I0707 06:14:30.954417 2317 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:14:30.955017 kubelet[2317]: E0707 06:14:30.954975 2317 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:14:30.955077 kubelet[2317]: E0707 06:14:30.955033 2317 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:14:31.055536 kubelet[2317]: I0707 06:14:31.055492 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:31.056015 kubelet[2317]: E0707 06:14:31.055956 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jul 7 06:14:31.257899 kubelet[2317]: I0707 06:14:31.257730 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:31.258268 kubelet[2317]: E0707 06:14:31.258220 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jul 7 06:14:31.287005 kubelet[2317]: E0707 06:14:31.286960 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.129:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:14:31.353738 kubelet[2317]: E0707 06:14:31.353675 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.129:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:14:31.358548 kubelet[2317]: E0707 06:14:31.358503 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.129:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:14:31.498322 kubelet[2317]: E0707 06:14:31.498258 2317 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.129:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:14:31.546389 kubelet[2317]: E0707 06:14:31.546239 2317 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.129:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.129:6443: connect: connection refused" interval="1.6s" Jul 7 06:14:31.660717 kubelet[2317]: I0707 06:14:31.660667 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:31.661194 kubelet[2317]: E0707 06:14:31.660989 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jul 7 06:14:31.684837 systemd[1]: Created slice kubepods-burstable-podeb7f0b14fae99e7f8c63aa8da801d40b.slice - libcontainer container kubepods-burstable-podeb7f0b14fae99e7f8c63aa8da801d40b.slice. Jul 7 06:14:31.705391 kubelet[2317]: E0707 06:14:31.705353 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:31.709699 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 7 06:14:31.728969 kubelet[2317]: E0707 06:14:31.728929 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:31.730758 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 7 06:14:31.732842 kubelet[2317]: E0707 06:14:31.732815 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:31.752383 kubelet[2317]: I0707 06:14:31.752276 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:31.752383 kubelet[2317]: I0707 06:14:31.752338 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:31.752383 kubelet[2317]: I0707 06:14:31.752367 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:31.752383 kubelet[2317]: I0707 06:14:31.752389 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:31.752693 kubelet[2317]: I0707 06:14:31.752482 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:31.752693 kubelet[2317]: I0707 06:14:31.752514 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:14:31.752693 kubelet[2317]: I0707 06:14:31.752535 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:14:31.752693 kubelet[2317]: I0707 06:14:31.752555 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:14:31.752693 kubelet[2317]: I0707 06:14:31.752589 2317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:14:32.005966 kubelet[2317]: E0707 06:14:32.005917 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.006648 containerd[1562]: time="2025-07-07T06:14:32.006591392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb7f0b14fae99e7f8c63aa8da801d40b,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:32.029579 kubelet[2317]: E0707 06:14:32.029538 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.030107 containerd[1562]: time="2025-07-07T06:14:32.030030102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:32.034814 kubelet[2317]: E0707 06:14:32.034306 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.034921 containerd[1562]: time="2025-07-07T06:14:32.034560850Z" level=info msg="connecting to shim 1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae" address="unix:///run/containerd/s/06710f44deb7115ed3ea763469a4f85e88c86dd6a70fb53071007638575db36f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:32.034921 containerd[1562]: time="2025-07-07T06:14:32.034897454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:32.193365 containerd[1562]: 
time="2025-07-07T06:14:32.193278248Z" level=info msg="connecting to shim 6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42" address="unix:///run/containerd/s/d39ba77cec0cb60f2a99bdbbdd7e2dd2f6138d0d02a503e0398f4b844ab9af20" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:32.203670 kubelet[2317]: E0707 06:14:32.203605 2317 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.129:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.129:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:14:32.269554 systemd[1]: Started cri-containerd-1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae.scope - libcontainer container 1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae. Jul 7 06:14:32.275962 containerd[1562]: time="2025-07-07T06:14:32.275807877Z" level=info msg="connecting to shim eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a" address="unix:///run/containerd/s/2762e1b5a83f594775418a4ca03bd12c9d6d234d1f209c79894aaef885f1021d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:32.310295 systemd[1]: Started cri-containerd-6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42.scope - libcontainer container 6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42. Jul 7 06:14:32.314538 systemd[1]: Started cri-containerd-eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a.scope - libcontainer container eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a. 
Jul 7 06:14:32.382736 containerd[1562]: time="2025-07-07T06:14:32.382674966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:eb7f0b14fae99e7f8c63aa8da801d40b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae\"" Jul 7 06:14:32.383973 kubelet[2317]: E0707 06:14:32.383949 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.389450 containerd[1562]: time="2025-07-07T06:14:32.389402906Z" level=info msg="CreateContainer within sandbox \"1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:14:32.406671 containerd[1562]: time="2025-07-07T06:14:32.405720623Z" level=info msg="Container ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:32.409272 containerd[1562]: time="2025-07-07T06:14:32.409244358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42\"" Jul 7 06:14:32.410051 kubelet[2317]: E0707 06:14:32.410002 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.416423 containerd[1562]: time="2025-07-07T06:14:32.416338013Z" level=info msg="CreateContainer within sandbox \"1db1f7a92ffd7a44f5e97833acf3fc8d36728fbf079fde62e6127fbcd86a52ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d\"" Jul 7 06:14:32.416596 containerd[1562]: 
time="2025-07-07T06:14:32.416553518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a\"" Jul 7 06:14:32.417152 containerd[1562]: time="2025-07-07T06:14:32.417124820Z" level=info msg="StartContainer for \"ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d\"" Jul 7 06:14:32.417359 containerd[1562]: time="2025-07-07T06:14:32.417322345Z" level=info msg="CreateContainer within sandbox \"6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:14:32.418197 kubelet[2317]: E0707 06:14:32.418120 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:32.418401 containerd[1562]: time="2025-07-07T06:14:32.418359282Z" level=info msg="connecting to shim ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d" address="unix:///run/containerd/s/06710f44deb7115ed3ea763469a4f85e88c86dd6a70fb53071007638575db36f" protocol=ttrpc version=3 Jul 7 06:14:32.425238 containerd[1562]: time="2025-07-07T06:14:32.425169092Z" level=info msg="CreateContainer within sandbox \"eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:14:32.428010 containerd[1562]: time="2025-07-07T06:14:32.427955398Z" level=info msg="Container f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:32.438732 containerd[1562]: time="2025-07-07T06:14:32.438679446Z" level=info msg="CreateContainer within sandbox \"6ea38fe591dae2a379cb4e18f302e4a5cb09850e4714a06bef613fde7d85da42\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d\"" Jul 7 06:14:32.439527 containerd[1562]: time="2025-07-07T06:14:32.439290530Z" level=info msg="StartContainer for \"f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d\"" Jul 7 06:14:32.441212 containerd[1562]: time="2025-07-07T06:14:32.441171847Z" level=info msg="connecting to shim f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d" address="unix:///run/containerd/s/d39ba77cec0cb60f2a99bdbbdd7e2dd2f6138d0d02a503e0398f4b844ab9af20" protocol=ttrpc version=3 Jul 7 06:14:32.441354 containerd[1562]: time="2025-07-07T06:14:32.441309193Z" level=info msg="Container 5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:32.445424 systemd[1]: Started cri-containerd-ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d.scope - libcontainer container ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d. 
Jul 7 06:14:32.452124 containerd[1562]: time="2025-07-07T06:14:32.452071187Z" level=info msg="CreateContainer within sandbox \"eafe34315f035cd73224ec3eb29ff6aeb4bbcaeadf3c02ba4b2d4405d4c84e6a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e\"" Jul 7 06:14:32.453123 containerd[1562]: time="2025-07-07T06:14:32.452544645Z" level=info msg="StartContainer for \"5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e\"" Jul 7 06:14:32.453487 containerd[1562]: time="2025-07-07T06:14:32.453456433Z" level=info msg="connecting to shim 5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e" address="unix:///run/containerd/s/2762e1b5a83f594775418a4ca03bd12c9d6d234d1f209c79894aaef885f1021d" protocol=ttrpc version=3 Jul 7 06:14:32.463468 kubelet[2317]: I0707 06:14:32.463433 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:32.463737 kubelet[2317]: E0707 06:14:32.463711 2317 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.129:6443/api/v1/nodes\": dial tcp 10.0.0.129:6443: connect: connection refused" node="localhost" Jul 7 06:14:32.472394 systemd[1]: Started cri-containerd-f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d.scope - libcontainer container f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d. Jul 7 06:14:32.476184 systemd[1]: Started cri-containerd-5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e.scope - libcontainer container 5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e. 
Jul 7 06:14:32.545487 containerd[1562]: time="2025-07-07T06:14:32.544235118Z" level=info msg="StartContainer for \"5313f52ff44be7fd5479a19109e09210add7bb24250c8ce7818886e72e96da7e\" returns successfully" Jul 7 06:14:32.545946 containerd[1562]: time="2025-07-07T06:14:32.545910366Z" level=info msg="StartContainer for \"f6d2774ac23a82599b812e42e502936aa7b42b111606dda8e1ace53d296a2b8d\" returns successfully" Jul 7 06:14:32.577284 containerd[1562]: time="2025-07-07T06:14:32.577202033Z" level=info msg="StartContainer for \"ee26a05dea5c837b452b230ed5f5ed3646ba04feb8d48278dd6a147e645bca4d\" returns successfully" Jul 7 06:14:33.175768 kubelet[2317]: E0707 06:14:33.175560 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:33.175768 kubelet[2317]: E0707 06:14:33.175699 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:33.179788 kubelet[2317]: E0707 06:14:33.179752 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:33.180542 kubelet[2317]: E0707 06:14:33.180458 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:33.182285 kubelet[2317]: E0707 06:14:33.182254 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:33.182494 kubelet[2317]: E0707 06:14:33.182476 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:33.680471 kubelet[2317]: 
E0707 06:14:33.680423 2317 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:14:34.028227 kubelet[2317]: E0707 06:14:34.028087 2317 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 7 06:14:34.064858 kubelet[2317]: I0707 06:14:34.064823 2317 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:14:34.071884 kubelet[2317]: I0707 06:14:34.071845 2317 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:14:34.071884 kubelet[2317]: E0707 06:14:34.071881 2317 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:14:34.078996 kubelet[2317]: E0707 06:14:34.078961 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:34.179178 kubelet[2317]: E0707 06:14:34.179121 2317 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:14:34.184728 kubelet[2317]: E0707 06:14:34.184706 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:34.184849 kubelet[2317]: E0707 06:14:34.184830 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:34.184906 kubelet[2317]: E0707 06:14:34.184826 2317 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:14:34.184961 kubelet[2317]: E0707 06:14:34.184945 2317 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:34.243561 kubelet[2317]: I0707 06:14:34.243495 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:34.248791 kubelet[2317]: E0707 06:14:34.248736 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:34.248791 kubelet[2317]: I0707 06:14:34.248762 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:14:34.250952 kubelet[2317]: E0707 06:14:34.250912 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:14:34.250952 kubelet[2317]: I0707 06:14:34.250932 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:14:34.252456 kubelet[2317]: E0707 06:14:34.252425 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:14:34.440305 kubelet[2317]: I0707 06:14:34.440266 2317 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:34.442138 kubelet[2317]: E0707 06:14:34.442112 2317 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:14:34.442273 kubelet[2317]: E0707 06:14:34.442251 2317 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:35.133747 kubelet[2317]: I0707 06:14:35.133693 2317 apiserver.go:52] "Watching apiserver" Jul 7 06:14:35.142945 kubelet[2317]: I0707 06:14:35.142897 2317 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:14:35.960276 systemd[1]: Reload requested from client PID 2607 ('systemctl') (unit session-7.scope)... Jul 7 06:14:35.960294 systemd[1]: Reloading... Jul 7 06:14:36.039153 zram_generator::config[2650]: No configuration found. Jul 7 06:14:36.155011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:14:36.287746 systemd[1]: Reloading finished in 327 ms. Jul 7 06:14:36.309339 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:36.335072 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:14:36.335489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:36.335548 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 132.3M memory peak. Jul 7 06:14:36.337931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:36.617365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:36.623775 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:14:36.666427 kubelet[2695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:14:36.667085 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:14:36.667085 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:36.667223 kubelet[2695]: I0707 06:14:36.666683 2695 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:14:36.675337 kubelet[2695]: I0707 06:14:36.675279 2695 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:14:36.675337 kubelet[2695]: I0707 06:14:36.675322 2695 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:14:36.675680 kubelet[2695]: I0707 06:14:36.675653 2695 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:14:36.677274 kubelet[2695]: I0707 06:14:36.677241 2695 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:14:36.927469 kubelet[2695]: I0707 06:14:36.926987 2695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:14:36.938838 kubelet[2695]: I0707 06:14:36.938639 2695 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:14:36.945130 kubelet[2695]: I0707 06:14:36.944874 2695 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:14:36.945307 kubelet[2695]: I0707 06:14:36.945145 2695 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:14:36.945410 kubelet[2695]: I0707 06:14:36.945183 2695 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:36.945410 kubelet[2695]: I0707 06:14:36.945395 2695 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:14:36.945410 
kubelet[2695]: I0707 06:14:36.945407 2695 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:14:36.945682 kubelet[2695]: I0707 06:14:36.945457 2695 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:36.945682 kubelet[2695]: I0707 06:14:36.945660 2695 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:14:36.945682 kubelet[2695]: I0707 06:14:36.945673 2695 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:36.945752 kubelet[2695]: I0707 06:14:36.945698 2695 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:14:36.945752 kubelet[2695]: I0707 06:14:36.945716 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:36.948494 kubelet[2695]: I0707 06:14:36.948416 2695 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:14:36.950114 kubelet[2695]: I0707 06:14:36.949003 2695 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:14:36.958115 kubelet[2695]: I0707 06:14:36.956276 2695 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:14:36.958115 kubelet[2695]: I0707 06:14:36.956347 2695 server.go:1289] "Started kubelet" Jul 7 06:14:36.958115 kubelet[2695]: I0707 06:14:36.956405 2695 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:36.958269 kubelet[2695]: I0707 06:14:36.958252 2695 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:14:36.958400 kubelet[2695]: I0707 06:14:36.958331 2695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:36.958764 kubelet[2695]: I0707 06:14:36.958741 2695 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:36.966728 kubelet[2695]: E0707 
06:14:36.966697 2695 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:14:36.970946 kubelet[2695]: I0707 06:14:36.967366 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:14:36.972335 kubelet[2695]: I0707 06:14:36.967480 2695 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:14:36.973829 kubelet[2695]: I0707 06:14:36.972518 2695 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:14:36.975089 kubelet[2695]: I0707 06:14:36.972673 2695 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:14:36.976009 kubelet[2695]: I0707 06:14:36.975507 2695 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:36.978044 kubelet[2695]: I0707 06:14:36.978015 2695 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:14:36.978308 kubelet[2695]: I0707 06:14:36.978179 2695 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:36.980811 kubelet[2695]: I0707 06:14:36.980772 2695 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:14:36.998425 kubelet[2695]: I0707 06:14:36.998240 2695 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:37.001789 kubelet[2695]: I0707 06:14:37.001748 2695 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6"
Jul 7 06:14:37.001789 kubelet[2695]: I0707 06:14:37.001789 2695 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 7 06:14:37.001913 kubelet[2695]: I0707 06:14:37.001815 2695 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:14:37.001913 kubelet[2695]: I0707 06:14:37.001823 2695 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 7 06:14:37.001913 kubelet[2695]: E0707 06:14:37.001878 2695 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:14:37.020628 sudo[2731]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 7 06:14:37.021046 sudo[2731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 7 06:14:37.037076 kubelet[2695]: I0707 06:14:37.037041 2695 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:14:37.037076 kubelet[2695]: I0707 06:14:37.037063 2695 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:14:37.037076 kubelet[2695]: I0707 06:14:37.037081 2695 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:14:37.037265 kubelet[2695]: I0707 06:14:37.037236 2695 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:14:37.037265 kubelet[2695]: I0707 06:14:37.037247 2695 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:14:37.037309 kubelet[2695]: I0707 06:14:37.037267 2695 policy_none.go:49] "None policy: Start"
Jul 7 06:14:37.037309 kubelet[2695]: I0707 06:14:37.037278 2695 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:14:37.037309 kubelet[2695]: I0707 06:14:37.037290 2695 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:14:37.038079 kubelet[2695]: I0707 06:14:37.037404 2695 state_mem.go:75] "Updated machine memory state"
Jul 7 06:14:37.043635 kubelet[2695]: E0707 06:14:37.043603 2695 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 7 06:14:37.043805 kubelet[2695]: I0707 06:14:37.043777 2695 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:14:37.043844 kubelet[2695]: I0707 06:14:37.043791 2695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:14:37.044216 kubelet[2695]: I0707 06:14:37.044188 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:14:37.051110 kubelet[2695]: E0707 06:14:37.050500 2695 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:14:37.103320 kubelet[2695]: I0707 06:14:37.103281 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.103500 kubelet[2695]: I0707 06:14:37.103467 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:37.103637 kubelet[2695]: I0707 06:14:37.103620 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:37.157460 kubelet[2695]: I0707 06:14:37.157426 2695 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:14:37.177718 kubelet[2695]: I0707 06:14:37.177615 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:37.177718 kubelet[2695]: I0707 06:14:37.177646 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.177718 kubelet[2695]: I0707 06:14:37.177669 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.177718 kubelet[2695]: I0707 06:14:37.177684 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.177718 kubelet[2695]: I0707 06:14:37.177698 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.178035 kubelet[2695]: I0707 06:14:37.177713 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:14:37.178035 kubelet[2695]: I0707 06:14:37.177727 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:14:37.178035 kubelet[2695]: I0707 06:14:37.177740 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:37.178035 kubelet[2695]: I0707 06:14:37.177813 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb7f0b14fae99e7f8c63aa8da801d40b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"eb7f0b14fae99e7f8c63aa8da801d40b\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:37.208748 kubelet[2695]: I0707 06:14:37.208010 2695 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 7 06:14:37.208748 kubelet[2695]: I0707 06:14:37.208142 2695 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:14:37.435851 kubelet[2695]: E0707 06:14:37.435742 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:37.504249 kubelet[2695]: E0707 06:14:37.504206 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:37.504385 kubelet[2695]: E0707 06:14:37.504302 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:37.524029 sudo[2731]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:37.946991 kubelet[2695]: I0707 06:14:37.946945 2695 apiserver.go:52] "Watching apiserver"
Jul 7 06:14:37.976207 kubelet[2695]: I0707 06:14:37.976174 2695 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:14:38.026080 kubelet[2695]: I0707 06:14:38.025942 2695 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:38.026239 kubelet[2695]: E0707 06:14:38.026147 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:38.026239 kubelet[2695]: E0707 06:14:38.026164 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:38.152960 kubelet[2695]: E0707 06:14:38.152640 2695 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:14:38.152960 kubelet[2695]: E0707 06:14:38.152915 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:38.177026 kubelet[2695]: I0707 06:14:38.176950 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.176933468 podStartE2EDuration="1.176933468s" podCreationTimestamp="2025-07-07 06:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:38.176764601 +0000 UTC m=+1.547908254" watchObservedRunningTime="2025-07-07 06:14:38.176933468 +0000 UTC m=+1.548077121"
Jul 7 06:14:38.199145 kubelet[2695]: I0707 06:14:38.198977 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.198958704 podStartE2EDuration="1.198958704s" podCreationTimestamp="2025-07-07 06:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:38.190271014 +0000 UTC m=+1.561414667" watchObservedRunningTime="2025-07-07 06:14:38.198958704 +0000 UTC m=+1.570102357"
Jul 7 06:14:38.199300 kubelet[2695]: I0707 06:14:38.199148 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.199143078 podStartE2EDuration="1.199143078s" podCreationTimestamp="2025-07-07 06:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:38.199123322 +0000 UTC m=+1.570266975" watchObservedRunningTime="2025-07-07 06:14:38.199143078 +0000 UTC m=+1.570286731"
Jul 7 06:14:38.773817 sudo[1766]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:38.775488 sshd[1765]: Connection closed by 10.0.0.1 port 60374
Jul 7 06:14:38.775894 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:38.780373 systemd[1]: sshd@6-10.0.0.129:22-10.0.0.1:60374.service: Deactivated successfully.
Jul 7 06:14:38.782610 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:14:38.782868 systemd[1]: session-7.scope: Consumed 5.326s CPU time, 261.2M memory peak.
Jul 7 06:14:38.784288 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:14:38.785592 systemd-logind[1538]: Removed session 7.
Jul 7 06:14:39.027501 kubelet[2695]: E0707 06:14:39.027373 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:39.028510 kubelet[2695]: E0707 06:14:39.027597 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:40.028753 kubelet[2695]: E0707 06:14:40.028699 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:41.716491 kubelet[2695]: I0707 06:14:41.716435 2695 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:14:41.717027 kubelet[2695]: I0707 06:14:41.716997 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:14:41.717055 containerd[1562]: time="2025-07-07T06:14:41.716799181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:14:41.793647 kubelet[2695]: E0707 06:14:41.793573 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:42.291322 kubelet[2695]: E0707 06:14:42.291263 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:42.570514 systemd[1]: Created slice kubepods-besteffort-pod90213f65_8113_45cd_a651_45e98f2de255.slice - libcontainer container kubepods-besteffort-pod90213f65_8113_45cd_a651_45e98f2de255.slice.
Jul 7 06:14:42.586923 systemd[1]: Created slice kubepods-burstable-pod4f512201_6f8b_4c0a_a6c1_61f2688630f3.slice - libcontainer container kubepods-burstable-pod4f512201_6f8b_4c0a_a6c1_61f2688630f3.slice.
Jul 7 06:14:42.610940 kubelet[2695]: I0707 06:14:42.610864 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-kernel\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.610940 kubelet[2695]: I0707 06:14:42.610926 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knmtg\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-kube-api-access-knmtg\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.610940 kubelet[2695]: I0707 06:14:42.610954 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90213f65-8113-45cd-a651-45e98f2de255-lib-modules\") pod \"kube-proxy-77kqk\" (UID: \"90213f65-8113-45cd-a651-45e98f2de255\") " pod="kube-system/kube-proxy-77kqk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.610972 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cni-path\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.610986 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-etc-cni-netd\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.611001 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f512201-6f8b-4c0a-a6c1-61f2688630f3-clustermesh-secrets\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.611016 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-run\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.611030 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hostproc\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611189 kubelet[2695]: I0707 06:14:42.611043 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-net\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611059 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hubble-tls\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611146 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/90213f65-8113-45cd-a651-45e98f2de255-kube-proxy\") pod \"kube-proxy-77kqk\" (UID: \"90213f65-8113-45cd-a651-45e98f2de255\") " pod="kube-system/kube-proxy-77kqk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611181 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90213f65-8113-45cd-a651-45e98f2de255-xtables-lock\") pod \"kube-proxy-77kqk\" (UID: \"90213f65-8113-45cd-a651-45e98f2de255\") " pod="kube-system/kube-proxy-77kqk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611206 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8mrl\" (UniqueName: \"kubernetes.io/projected/90213f65-8113-45cd-a651-45e98f2de255-kube-api-access-h8mrl\") pod \"kube-proxy-77kqk\" (UID: \"90213f65-8113-45cd-a651-45e98f2de255\") " pod="kube-system/kube-proxy-77kqk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611247 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-bpf-maps\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611377 kubelet[2695]: I0707 06:14:42.611298 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-cgroup\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611574 kubelet[2695]: I0707 06:14:42.611313 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-lib-modules\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611574 kubelet[2695]: I0707 06:14:42.611336 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-xtables-lock\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.611574 kubelet[2695]: I0707 06:14:42.611351 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-config-path\") pod \"cilium-s7knk\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") " pod="kube-system/cilium-s7knk"
Jul 7 06:14:42.813202 kubelet[2695]: I0707 06:14:42.813138 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnpj\" (UniqueName: \"kubernetes.io/projected/85150542-9e14-40c1-826b-ea3ca2302240-kube-api-access-pfnpj\") pod \"cilium-operator-6c4d7847fc-wlczw\" (UID: \"85150542-9e14-40c1-826b-ea3ca2302240\") " pod="kube-system/cilium-operator-6c4d7847fc-wlczw"
Jul 7 06:14:42.813202 kubelet[2695]: I0707 06:14:42.813185 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85150542-9e14-40c1-826b-ea3ca2302240-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wlczw\" (UID: \"85150542-9e14-40c1-826b-ea3ca2302240\") " pod="kube-system/cilium-operator-6c4d7847fc-wlczw"
Jul 7 06:14:42.814439 systemd[1]: Created slice kubepods-besteffort-pod85150542_9e14_40c1_826b_ea3ca2302240.slice - libcontainer container kubepods-besteffort-pod85150542_9e14_40c1_826b_ea3ca2302240.slice.
Jul 7 06:14:42.882988 kubelet[2695]: E0707 06:14:42.882960 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:42.883646 containerd[1562]: time="2025-07-07T06:14:42.883602450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77kqk,Uid:90213f65-8113-45cd-a651-45e98f2de255,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:42.891689 kubelet[2695]: E0707 06:14:42.891648 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:42.892121 containerd[1562]: time="2025-07-07T06:14:42.892003318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7knk,Uid:4f512201-6f8b-4c0a-a6c1-61f2688630f3,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:42.929815 containerd[1562]: time="2025-07-07T06:14:42.929761878Z" level=info msg="connecting to shim 6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:14:42.929933 containerd[1562]: time="2025-07-07T06:14:42.929806020Z" level=info msg="connecting to shim 156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8" address="unix:///run/containerd/s/503d7080e2b0d8c4e461a2aa5a83537904323f1d2dc35dad1d6113e8679630c5" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:14:42.981327 systemd[1]: Started cri-containerd-6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea.scope - libcontainer container 6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea.
Jul 7 06:14:42.986251 systemd[1]: Started cri-containerd-156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8.scope - libcontainer container 156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8.
Jul 7 06:14:43.016637 containerd[1562]: time="2025-07-07T06:14:43.016578122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s7knk,Uid:4f512201-6f8b-4c0a-a6c1-61f2688630f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\""
Jul 7 06:14:43.017527 kubelet[2695]: E0707 06:14:43.017498 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.017997 containerd[1562]: time="2025-07-07T06:14:43.017961874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77kqk,Uid:90213f65-8113-45cd-a651-45e98f2de255,Namespace:kube-system,Attempt:0,} returns sandbox id \"156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8\""
Jul 7 06:14:43.018805 containerd[1562]: time="2025-07-07T06:14:43.018773169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 7 06:14:43.019148 kubelet[2695]: E0707 06:14:43.019050 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.024606 containerd[1562]: time="2025-07-07T06:14:43.024561989Z" level=info msg="CreateContainer within sandbox \"156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:14:43.034026 kubelet[2695]: E0707 06:14:43.034000 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.039511 containerd[1562]: time="2025-07-07T06:14:43.039451381Z" level=info msg="Container 0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:14:43.048006 containerd[1562]: time="2025-07-07T06:14:43.047918698Z" level=info msg="CreateContainer within sandbox \"156a2b5cb2cb5e367bffc8274cecd5570ba4d82f51a0ab0a43fd12c9833ee4f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633\""
Jul 7 06:14:43.048883 containerd[1562]: time="2025-07-07T06:14:43.048866828Z" level=info msg="StartContainer for \"0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633\""
Jul 7 06:14:43.050472 containerd[1562]: time="2025-07-07T06:14:43.050439013Z" level=info msg="connecting to shim 0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633" address="unix:///run/containerd/s/503d7080e2b0d8c4e461a2aa5a83537904323f1d2dc35dad1d6113e8679630c5" protocol=ttrpc version=3
Jul 7 06:14:43.070256 systemd[1]: Started cri-containerd-0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633.scope - libcontainer container 0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633.
Jul 7 06:14:43.116071 containerd[1562]: time="2025-07-07T06:14:43.116027820Z" level=info msg="StartContainer for \"0e8c6a7b9232124ac50037940f303e293d124d650adf7dacc64716bfe8d60633\" returns successfully"
Jul 7 06:14:43.117641 kubelet[2695]: E0707 06:14:43.117608 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.118032 containerd[1562]: time="2025-07-07T06:14:43.117996114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wlczw,Uid:85150542-9e14-40c1-826b-ea3ca2302240,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:43.138624 containerd[1562]: time="2025-07-07T06:14:43.137975851Z" level=info msg="connecting to shim a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc" address="unix:///run/containerd/s/e6c31547f61a6ee4ff81e51b30aac7fc3f6fd40dd1916c97a278f3d9165dea40" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:14:43.164263 systemd[1]: Started cri-containerd-a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc.scope - libcontainer container a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc.
Jul 7 06:14:43.211612 containerd[1562]: time="2025-07-07T06:14:43.211531935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wlczw,Uid:85150542-9e14-40c1-826b-ea3ca2302240,Namespace:kube-system,Attempt:0,} returns sandbox id \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\""
Jul 7 06:14:43.212823 kubelet[2695]: E0707 06:14:43.212799 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:43.885773 kubelet[2695]: E0707 06:14:43.885721 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:44.037883 kubelet[2695]: E0707 06:14:44.037836 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:44.038775 kubelet[2695]: E0707 06:14:44.038744 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:44.056518 kubelet[2695]: I0707 06:14:44.056443 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77kqk" podStartSLOduration=2.056423475 podStartE2EDuration="2.056423475s" podCreationTimestamp="2025-07-07 06:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:44.04852273 +0000 UTC m=+7.419666383" watchObservedRunningTime="2025-07-07 06:14:44.056423475 +0000 UTC m=+7.427567128"
Jul 7 06:14:45.040488 kubelet[2695]: E0707 06:14:45.040437 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:45.041235 kubelet[2695]: E0707 06:14:45.040826 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:51.638959 update_engine[1547]: I20250707 06:14:51.638803 1547 update_attempter.cc:509] Updating boot flags...
Jul 7 06:14:51.808067 kubelet[2695]: E0707 06:14:51.802663 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:52.049681 kubelet[2695]: E0707 06:14:52.049587 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:52.482786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770872397.mount: Deactivated successfully.
Jul 7 06:14:57.067374 containerd[1562]: time="2025-07-07T06:14:57.067311435Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:57.068077 containerd[1562]: time="2025-07-07T06:14:57.068047669Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 7 06:14:57.069186 containerd[1562]: time="2025-07-07T06:14:57.069153591Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:14:57.070704 containerd[1562]: time="2025-07-07T06:14:57.070679047Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.051876462s"
Jul 7 06:14:57.070772 containerd[1562]: time="2025-07-07T06:14:57.070706876Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 7 06:14:57.076269 containerd[1562]: time="2025-07-07T06:14:57.076213461Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 7 06:14:57.086023 containerd[1562]: time="2025-07-07T06:14:57.085975828Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:14:57.095963 containerd[1562]: time="2025-07-07T06:14:57.095923210Z" level=info msg="Container ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:14:57.100850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433911473.mount: Deactivated successfully.
Jul 7 06:14:57.102724 containerd[1562]: time="2025-07-07T06:14:57.102687656Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\""
Jul 7 06:14:57.103365 containerd[1562]: time="2025-07-07T06:14:57.103342148Z" level=info msg="StartContainer for \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\""
Jul 7 06:14:57.104293 containerd[1562]: time="2025-07-07T06:14:57.104254297Z" level=info msg="connecting to shim ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" protocol=ttrpc version=3
Jul 7 06:14:57.135380 systemd[1]: Started cri-containerd-ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61.scope - libcontainer container ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61.
Jul 7 06:14:57.167642 containerd[1562]: time="2025-07-07T06:14:57.167600586Z" level=info msg="StartContainer for \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" returns successfully"
Jul 7 06:14:57.175328 systemd[1]: cri-containerd-ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61.scope: Deactivated successfully.
Jul 7 06:14:57.177376 containerd[1562]: time="2025-07-07T06:14:57.177338432Z" level=info msg="received exit event container_id:\"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" id:\"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" pid:3140 exited_at:{seconds:1751868897 nanos:176607571}"
Jul 7 06:14:57.177643 containerd[1562]: time="2025-07-07T06:14:57.177622364Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" id:\"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" pid:3140 exited_at:{seconds:1751868897 nanos:176607571}"
Jul 7 06:14:57.199110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61-rootfs.mount: Deactivated successfully.
Jul 7 06:14:58.059698 kubelet[2695]: E0707 06:14:58.059630 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:14:58.065254 containerd[1562]: time="2025-07-07T06:14:58.065183191Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:14:58.076859 containerd[1562]: time="2025-07-07T06:14:58.076814582Z" level=info msg="Container ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:14:58.083452 containerd[1562]: time="2025-07-07T06:14:58.083408902Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\""
Jul 7 06:14:58.083861 containerd[1562]: time="2025-07-07T06:14:58.083838813Z" level=info msg="StartContainer for \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\""
Jul 7 06:14:58.084645 containerd[1562]: time="2025-07-07T06:14:58.084623580Z" level=info msg="connecting to shim ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" protocol=ttrpc version=3
Jul 7 06:14:58.109287 systemd[1]: Started cri-containerd-ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4.scope - libcontainer container ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4.
Jul 7 06:14:58.140754 containerd[1562]: time="2025-07-07T06:14:58.140712329Z" level=info msg="StartContainer for \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" returns successfully"
Jul 7 06:14:58.156788 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:14:58.157044 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:14:58.157445 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:14:58.159428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:14:58.160563 containerd[1562]: time="2025-07-07T06:14:58.160524880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" id:\"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" pid:3185 exited_at:{seconds:1751868898 nanos:160257255}" Jul 7 06:14:58.160623 containerd[1562]: time="2025-07-07T06:14:58.160596218Z" level=info msg="received exit event container_id:\"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" id:\"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" pid:3185 exited_at:{seconds:1751868898 nanos:160257255}" Jul 7 06:14:58.161629 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 06:14:58.162070 systemd[1]: cri-containerd-ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4.scope: Deactivated successfully. Jul 7 06:14:58.187072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4-rootfs.mount: Deactivated successfully. Jul 7 06:14:58.188479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 06:14:58.806531 containerd[1562]: time="2025-07-07T06:14:58.806481079Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:58.807284 containerd[1562]: time="2025-07-07T06:14:58.807242939Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 06:14:58.808461 containerd[1562]: time="2025-07-07T06:14:58.808423095Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:58.809541 containerd[1562]: time="2025-07-07T06:14:58.809509838Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.733266215s" Jul 7 06:14:58.809541 containerd[1562]: time="2025-07-07T06:14:58.809535762Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 06:14:58.814883 containerd[1562]: time="2025-07-07T06:14:58.814856832Z" level=info msg="CreateContainer within sandbox \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 06:14:58.821241 containerd[1562]: time="2025-07-07T06:14:58.821215413Z" level=info msg="Container 
f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:58.827597 containerd[1562]: time="2025-07-07T06:14:58.827561749Z" level=info msg="CreateContainer within sandbox \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\"" Jul 7 06:14:58.827964 containerd[1562]: time="2025-07-07T06:14:58.827905422Z" level=info msg="StartContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\"" Jul 7 06:14:58.828757 containerd[1562]: time="2025-07-07T06:14:58.828734911Z" level=info msg="connecting to shim f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3" address="unix:///run/containerd/s/e6c31547f61a6ee4ff81e51b30aac7fc3f6fd40dd1916c97a278f3d9165dea40" protocol=ttrpc version=3 Jul 7 06:14:58.851273 systemd[1]: Started cri-containerd-f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3.scope - libcontainer container f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3. 
Jul 7 06:14:58.895607 containerd[1562]: time="2025-07-07T06:14:58.895555557Z" level=info msg="StartContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" returns successfully" Jul 7 06:14:59.062983 kubelet[2695]: E0707 06:14:59.062462 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:59.066364 kubelet[2695]: E0707 06:14:59.066328 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:14:59.071605 containerd[1562]: time="2025-07-07T06:14:59.071565839Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:14:59.073876 kubelet[2695]: I0707 06:14:59.073814 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wlczw" podStartSLOduration=1.479503201 podStartE2EDuration="17.073795762s" podCreationTimestamp="2025-07-07 06:14:42 +0000 UTC" firstStartedPulling="2025-07-07 06:14:43.215698535 +0000 UTC m=+6.586842188" lastFinishedPulling="2025-07-07 06:14:58.809991085 +0000 UTC m=+22.181134749" observedRunningTime="2025-07-07 06:14:59.072775796 +0000 UTC m=+22.443919469" watchObservedRunningTime="2025-07-07 06:14:59.073795762 +0000 UTC m=+22.444939435" Jul 7 06:14:59.086451 containerd[1562]: time="2025-07-07T06:14:59.086379218Z" level=info msg="Container 1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:59.096410 containerd[1562]: time="2025-07-07T06:14:59.096349450Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\"" Jul 7 06:14:59.097087 containerd[1562]: time="2025-07-07T06:14:59.097060267Z" level=info msg="StartContainer for \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\"" Jul 7 06:14:59.098008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277147769.mount: Deactivated successfully. Jul 7 06:14:59.098907 containerd[1562]: time="2025-07-07T06:14:59.098870113Z" level=info msg="connecting to shim 1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" protocol=ttrpc version=3 Jul 7 06:14:59.124312 systemd[1]: Started cri-containerd-1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016.scope - libcontainer container 1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016. Jul 7 06:14:59.171885 containerd[1562]: time="2025-07-07T06:14:59.171829213Z" level=info msg="StartContainer for \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" returns successfully" Jul 7 06:14:59.172949 systemd[1]: cri-containerd-1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016.scope: Deactivated successfully. 
Jul 7 06:14:59.174664 containerd[1562]: time="2025-07-07T06:14:59.174630665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" id:\"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" pid:3282 exited_at:{seconds:1751868899 nanos:174177459}" Jul 7 06:14:59.174740 containerd[1562]: time="2025-07-07T06:14:59.174647229Z" level=info msg="received exit event container_id:\"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" id:\"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" pid:3282 exited_at:{seconds:1751868899 nanos:174177459}" Jul 7 06:14:59.204713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016-rootfs.mount: Deactivated successfully. Jul 7 06:15:00.070911 kubelet[2695]: E0707 06:15:00.070869 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:00.071869 kubelet[2695]: E0707 06:15:00.071040 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:00.076371 containerd[1562]: time="2025-07-07T06:15:00.076302744Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:15:00.089619 containerd[1562]: time="2025-07-07T06:15:00.089549622Z" level=info msg="Container 47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:00.092012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737964010.mount: Deactivated successfully. 
Jul 7 06:15:00.098817 containerd[1562]: time="2025-07-07T06:15:00.098765683Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\"" Jul 7 06:15:00.099978 containerd[1562]: time="2025-07-07T06:15:00.099278086Z" level=info msg="StartContainer for \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\"" Jul 7 06:15:00.100326 containerd[1562]: time="2025-07-07T06:15:00.100299176Z" level=info msg="connecting to shim 47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" protocol=ttrpc version=3 Jul 7 06:15:00.127237 systemd[1]: Started cri-containerd-47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3.scope - libcontainer container 47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3. Jul 7 06:15:00.158928 systemd[1]: cri-containerd-47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3.scope: Deactivated successfully. 
Jul 7 06:15:00.159644 containerd[1562]: time="2025-07-07T06:15:00.159588105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" id:\"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" pid:3322 exited_at:{seconds:1751868900 nanos:159235590}" Jul 7 06:15:00.161510 containerd[1562]: time="2025-07-07T06:15:00.161467348Z" level=info msg="received exit event container_id:\"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" id:\"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" pid:3322 exited_at:{seconds:1751868900 nanos:159235590}" Jul 7 06:15:00.169753 containerd[1562]: time="2025-07-07T06:15:00.169708133Z" level=info msg="StartContainer for \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" returns successfully" Jul 7 06:15:00.185296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3-rootfs.mount: Deactivated successfully. 
Jul 7 06:15:01.078167 kubelet[2695]: E0707 06:15:01.078134 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:01.086164 containerd[1562]: time="2025-07-07T06:15:01.086067647Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:15:01.139189 containerd[1562]: time="2025-07-07T06:15:01.139143547Z" level=info msg="Container 5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:01.148554 containerd[1562]: time="2025-07-07T06:15:01.148504864Z" level=info msg="CreateContainer within sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\"" Jul 7 06:15:01.149111 containerd[1562]: time="2025-07-07T06:15:01.149081355Z" level=info msg="StartContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\"" Jul 7 06:15:01.150173 containerd[1562]: time="2025-07-07T06:15:01.150117848Z" level=info msg="connecting to shim 5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc" address="unix:///run/containerd/s/4502ce6ccfd009ccd3114846e016f92cf3373e07577eed9bd1f6115587acb8fd" protocol=ttrpc version=3 Jul 7 06:15:01.178269 systemd[1]: Started cri-containerd-5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc.scope - libcontainer container 5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc. 
Jul 7 06:15:01.263911 containerd[1562]: time="2025-07-07T06:15:01.263864878Z" level=info msg="StartContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" returns successfully" Jul 7 06:15:01.353860 containerd[1562]: time="2025-07-07T06:15:01.353713849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" id:\"d13bef7ede0d1df26147bb3ac0582727d88b35bcfdb0b6796531efbdfafff6ac\" pid:3393 exited_at:{seconds:1751868901 nanos:353398813}" Jul 7 06:15:01.392639 kubelet[2695]: I0707 06:15:01.392592 2695 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:15:01.439675 systemd[1]: Created slice kubepods-burstable-pod68a6270b_0989_441e_9b0d_9daf2092bf04.slice - libcontainer container kubepods-burstable-pod68a6270b_0989_441e_9b0d_9daf2092bf04.slice. Jul 7 06:15:01.449182 systemd[1]: Created slice kubepods-burstable-pod2c423af0_6099_45dd_85ea_a797ad1e3fb3.slice - libcontainer container kubepods-burstable-pod2c423af0_6099_45dd_85ea_a797ad1e3fb3.slice. 
Jul 7 06:15:01.534638 kubelet[2695]: I0707 06:15:01.534492 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68a6270b-0989-441e-9b0d-9daf2092bf04-config-volume\") pod \"coredns-674b8bbfcf-nt8hj\" (UID: \"68a6270b-0989-441e-9b0d-9daf2092bf04\") " pod="kube-system/coredns-674b8bbfcf-nt8hj" Jul 7 06:15:01.534638 kubelet[2695]: I0707 06:15:01.534540 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4mm\" (UniqueName: \"kubernetes.io/projected/2c423af0-6099-45dd-85ea-a797ad1e3fb3-kube-api-access-kd4mm\") pod \"coredns-674b8bbfcf-cdr24\" (UID: \"2c423af0-6099-45dd-85ea-a797ad1e3fb3\") " pod="kube-system/coredns-674b8bbfcf-cdr24" Jul 7 06:15:01.534638 kubelet[2695]: I0707 06:15:01.534561 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bj45\" (UniqueName: \"kubernetes.io/projected/68a6270b-0989-441e-9b0d-9daf2092bf04-kube-api-access-8bj45\") pod \"coredns-674b8bbfcf-nt8hj\" (UID: \"68a6270b-0989-441e-9b0d-9daf2092bf04\") " pod="kube-system/coredns-674b8bbfcf-nt8hj" Jul 7 06:15:01.534638 kubelet[2695]: I0707 06:15:01.534582 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c423af0-6099-45dd-85ea-a797ad1e3fb3-config-volume\") pod \"coredns-674b8bbfcf-cdr24\" (UID: \"2c423af0-6099-45dd-85ea-a797ad1e3fb3\") " pod="kube-system/coredns-674b8bbfcf-cdr24" Jul 7 06:15:01.746433 kubelet[2695]: E0707 06:15:01.746377 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:01.747063 containerd[1562]: time="2025-07-07T06:15:01.747019864Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-nt8hj,Uid:68a6270b-0989-441e-9b0d-9daf2092bf04,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:01.752854 kubelet[2695]: E0707 06:15:01.752821 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:01.754120 containerd[1562]: time="2025-07-07T06:15:01.754065688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cdr24,Uid:2c423af0-6099-45dd-85ea-a797ad1e3fb3,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:02.102130 kubelet[2695]: E0707 06:15:02.101975 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:02.116917 kubelet[2695]: I0707 06:15:02.116828 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s7knk" podStartSLOduration=6.059201223 podStartE2EDuration="20.116809178s" podCreationTimestamp="2025-07-07 06:14:42 +0000 UTC" firstStartedPulling="2025-07-07 06:14:43.018479815 +0000 UTC m=+6.389623458" lastFinishedPulling="2025-07-07 06:14:57.07608776 +0000 UTC m=+20.447231413" observedRunningTime="2025-07-07 06:15:02.116183652 +0000 UTC m=+25.487327315" watchObservedRunningTime="2025-07-07 06:15:02.116809178 +0000 UTC m=+25.487952832" Jul 7 06:15:03.131018 kubelet[2695]: E0707 06:15:03.130977 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:03.491650 systemd-networkd[1486]: cilium_host: Link UP Jul 7 06:15:03.491814 systemd-networkd[1486]: cilium_net: Link UP Jul 7 06:15:03.491989 systemd-networkd[1486]: cilium_net: Gained carrier Jul 7 06:15:03.492178 systemd-networkd[1486]: cilium_host: Gained carrier Jul 7 06:15:03.597895 systemd-networkd[1486]: 
cilium_vxlan: Link UP Jul 7 06:15:03.597908 systemd-networkd[1486]: cilium_vxlan: Gained carrier Jul 7 06:15:03.653329 systemd-networkd[1486]: cilium_net: Gained IPv6LL Jul 7 06:15:03.820136 kernel: NET: Registered PF_ALG protocol family Jul 7 06:15:03.925310 systemd-networkd[1486]: cilium_host: Gained IPv6LL Jul 7 06:15:04.133383 kubelet[2695]: E0707 06:15:04.133283 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:04.467526 systemd-networkd[1486]: lxc_health: Link UP Jul 7 06:15:04.467832 systemd-networkd[1486]: lxc_health: Gained carrier Jul 7 06:15:04.779861 systemd-networkd[1486]: lxc3b4497d4b1e0: Link UP Jul 7 06:15:04.788135 kernel: eth0: renamed from tmp4de97 Jul 7 06:15:04.802115 kernel: eth0: renamed from tmp37207 Jul 7 06:15:04.804859 systemd-networkd[1486]: lxc3b4497d4b1e0: Gained carrier Jul 7 06:15:04.805057 systemd-networkd[1486]: lxcaaa8171a15a7: Link UP Jul 7 06:15:04.809198 systemd-networkd[1486]: lxcaaa8171a15a7: Gained carrier Jul 7 06:15:04.845232 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Jul 7 06:15:05.134906 kubelet[2695]: E0707 06:15:05.134879 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:05.376068 systemd[1]: Started sshd@7-10.0.0.129:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904). Jul 7 06:15:05.426835 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:15:05.428231 sshd-session[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:05.433122 systemd-logind[1538]: New session 8 of user core. Jul 7 06:15:05.442225 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 06:15:05.617159 sshd[3862]: Connection closed by 10.0.0.1 port 33904 Jul 7 06:15:05.617521 sshd-session[3859]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:05.622214 systemd[1]: sshd@7-10.0.0.129:22-10.0.0.1:33904.service: Deactivated successfully. Jul 7 06:15:05.624829 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:15:05.625781 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:15:05.627223 systemd-logind[1538]: Removed session 8. Jul 7 06:15:06.136911 kubelet[2695]: E0707 06:15:06.136859 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:06.253319 systemd-networkd[1486]: lxc_health: Gained IPv6LL Jul 7 06:15:06.637273 systemd-networkd[1486]: lxc3b4497d4b1e0: Gained IPv6LL Jul 7 06:15:06.765349 systemd-networkd[1486]: lxcaaa8171a15a7: Gained IPv6LL Jul 7 06:15:07.137952 kubelet[2695]: E0707 06:15:07.137917 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:08.364310 containerd[1562]: time="2025-07-07T06:15:08.364261983Z" level=info msg="connecting to shim 4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5" address="unix:///run/containerd/s/603d4fc7390b3a5931d2b7b3a0043df5f3e74a0663e12cd1ec05b3177d0d65e4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:08.384604 containerd[1562]: time="2025-07-07T06:15:08.384402847Z" level=info msg="connecting to shim 37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13" address="unix:///run/containerd/s/b26beef0516845cd08e8152e900f7a7a8c63b140d74389db6c6b20ec0fcc8438" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:08.397588 systemd[1]: Started cri-containerd-4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5.scope - libcontainer 
container 4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5. Jul 7 06:15:08.406527 systemd[1]: Started cri-containerd-37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13.scope - libcontainer container 37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13. Jul 7 06:15:08.413621 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:15:08.420263 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:15:08.445065 containerd[1562]: time="2025-07-07T06:15:08.445027756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt8hj,Uid:68a6270b-0989-441e-9b0d-9daf2092bf04,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5\"" Jul 7 06:15:08.448715 kubelet[2695]: E0707 06:15:08.448664 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:08.454918 containerd[1562]: time="2025-07-07T06:15:08.454878557Z" level=info msg="CreateContainer within sandbox \"4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:08.458025 containerd[1562]: time="2025-07-07T06:15:08.457994851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cdr24,Uid:2c423af0-6099-45dd-85ea-a797ad1e3fb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13\"" Jul 7 06:15:08.458886 kubelet[2695]: E0707 06:15:08.458857 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:08.463689 containerd[1562]: 
time="2025-07-07T06:15:08.463379449Z" level=info msg="CreateContainer within sandbox \"37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:08.469957 containerd[1562]: time="2025-07-07T06:15:08.469934445Z" level=info msg="Container eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:08.480549 containerd[1562]: time="2025-07-07T06:15:08.480500622Z" level=info msg="CreateContainer within sandbox \"4de97d69ac8cb31356aff716d0c6d24c9ffb4934cde07311d5f0d7990457f2d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50\"" Jul 7 06:15:08.481130 containerd[1562]: time="2025-07-07T06:15:08.481044835Z" level=info msg="StartContainer for \"eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50\"" Jul 7 06:15:08.482177 containerd[1562]: time="2025-07-07T06:15:08.482024922Z" level=info msg="Container 1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:08.482669 containerd[1562]: time="2025-07-07T06:15:08.482626960Z" level=info msg="connecting to shim eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50" address="unix:///run/containerd/s/603d4fc7390b3a5931d2b7b3a0043df5f3e74a0663e12cd1ec05b3177d0d65e4" protocol=ttrpc version=3 Jul 7 06:15:08.489058 containerd[1562]: time="2025-07-07T06:15:08.489019832Z" level=info msg="CreateContainer within sandbox \"37207281aaa2fe370e887732e41173356d28f9a4f7806ee1a171bf69a99c0a13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93\"" Jul 7 06:15:08.489629 containerd[1562]: time="2025-07-07T06:15:08.489599976Z" level=info msg="StartContainer for \"1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93\"" Jul 7 06:15:08.505234 
systemd[1]: Started cri-containerd-eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50.scope - libcontainer container eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50. Jul 7 06:15:08.515691 containerd[1562]: time="2025-07-07T06:15:08.515618457Z" level=info msg="connecting to shim 1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93" address="unix:///run/containerd/s/b26beef0516845cd08e8152e900f7a7a8c63b140d74389db6c6b20ec0fcc8438" protocol=ttrpc version=3 Jul 7 06:15:08.536264 systemd[1]: Started cri-containerd-1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93.scope - libcontainer container 1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93. Jul 7 06:15:08.544440 containerd[1562]: time="2025-07-07T06:15:08.544407799Z" level=info msg="StartContainer for \"eb09d520f9cb518e1324f89f4f24ffcc779be5c49ef51366bfc82fe0fe4edb50\" returns successfully" Jul 7 06:15:08.572632 containerd[1562]: time="2025-07-07T06:15:08.572581716Z" level=info msg="StartContainer for \"1f3ffee7eb4bc97ac45cecc7881b00737e7c6cfeea0b371f2fb22af1e62e7f93\" returns successfully" Jul 7 06:15:09.148044 kubelet[2695]: E0707 06:15:09.147767 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:09.151475 kubelet[2695]: E0707 06:15:09.151431 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:09.158024 kubelet[2695]: I0707 06:15:09.157478 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cdr24" podStartSLOduration=27.157312966 podStartE2EDuration="27.157312966s" podCreationTimestamp="2025-07-07 06:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-07-07 06:15:09.157307756 +0000 UTC m=+32.528451409" watchObservedRunningTime="2025-07-07 06:15:09.157312966 +0000 UTC m=+32.528456619" Jul 7 06:15:09.166299 kubelet[2695]: I0707 06:15:09.166218 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nt8hj" podStartSLOduration=27.166198449 podStartE2EDuration="27.166198449s" podCreationTimestamp="2025-07-07 06:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:09.164967846 +0000 UTC m=+32.536111499" watchObservedRunningTime="2025-07-07 06:15:09.166198449 +0000 UTC m=+32.537342102" Jul 7 06:15:09.360116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434219910.mount: Deactivated successfully. Jul 7 06:15:10.152293 kubelet[2695]: E0707 06:15:10.152239 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:10.152293 kubelet[2695]: E0707 06:15:10.152250 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:10.637083 systemd[1]: Started sshd@8-10.0.0.129:22-10.0.0.1:41080.service - OpenSSH per-connection server daemon (10.0.0.1:41080). Jul 7 06:15:10.690307 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 41080 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:15:10.691816 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:10.696421 systemd-logind[1538]: New session 9 of user core. Jul 7 06:15:10.707253 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 7 06:15:10.986540 sshd[4063]: Connection closed by 10.0.0.1 port 41080 Jul 7 06:15:10.986872 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:10.991237 systemd[1]: sshd@8-10.0.0.129:22-10.0.0.1:41080.service: Deactivated successfully. Jul 7 06:15:10.993855 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:15:10.994801 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:15:10.996712 systemd-logind[1538]: Removed session 9. Jul 7 06:15:11.154199 kubelet[2695]: E0707 06:15:11.154163 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:15:16.002230 systemd[1]: Started sshd@9-10.0.0.129:22-10.0.0.1:41096.service - OpenSSH per-connection server daemon (10.0.0.1:41096). Jul 7 06:15:16.054325 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 41096 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:15:16.055772 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:15:16.060149 systemd-logind[1538]: New session 10 of user core. Jul 7 06:15:16.071251 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:15:16.214404 sshd[4083]: Connection closed by 10.0.0.1 port 41096 Jul 7 06:15:16.214724 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Jul 7 06:15:16.218553 systemd[1]: sshd@9-10.0.0.129:22-10.0.0.1:41096.service: Deactivated successfully. Jul 7 06:15:16.220681 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:15:16.221547 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:15:16.222703 systemd-logind[1538]: Removed session 10. Jul 7 06:15:21.226992 systemd[1]: Started sshd@10-10.0.0.129:22-10.0.0.1:60418.service - OpenSSH per-connection server daemon (10.0.0.1:60418). 
Jul 7 06:15:21.277932 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 60418 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:21.279334 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:21.283521 systemd-logind[1538]: New session 11 of user core.
Jul 7 06:15:21.293253 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:15:21.408692 sshd[4099]: Connection closed by 10.0.0.1 port 60418
Jul 7 06:15:21.409246 sshd-session[4097]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:21.425656 systemd[1]: sshd@10-10.0.0.129:22-10.0.0.1:60418.service: Deactivated successfully.
Jul 7 06:15:21.427378 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:15:21.428053 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:15:21.430771 systemd[1]: Started sshd@11-10.0.0.129:22-10.0.0.1:60426.service - OpenSSH per-connection server daemon (10.0.0.1:60426).
Jul 7 06:15:21.431439 systemd-logind[1538]: Removed session 11.
Jul 7 06:15:21.479330 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 60426 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:21.480565 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:21.485128 systemd-logind[1538]: New session 12 of user core.
Jul 7 06:15:21.496239 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:15:21.647986 sshd[4115]: Connection closed by 10.0.0.1 port 60426
Jul 7 06:15:21.648575 sshd-session[4113]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:21.659895 systemd[1]: sshd@11-10.0.0.129:22-10.0.0.1:60426.service: Deactivated successfully.
Jul 7 06:15:21.661761 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:15:21.662522 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:15:21.665154 systemd[1]: Started sshd@12-10.0.0.129:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436).
Jul 7 06:15:21.665757 systemd-logind[1538]: Removed session 12.
Jul 7 06:15:21.717541 sshd[4127]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:21.718877 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:21.723269 systemd-logind[1538]: New session 13 of user core.
Jul 7 06:15:21.739264 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:15:21.884136 sshd[4129]: Connection closed by 10.0.0.1 port 60436
Jul 7 06:15:21.884451 sshd-session[4127]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:21.888717 systemd[1]: sshd@12-10.0.0.129:22-10.0.0.1:60436.service: Deactivated successfully.
Jul 7 06:15:21.890813 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:15:21.891631 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:15:21.892930 systemd-logind[1538]: Removed session 13.
Jul 7 06:15:26.900565 systemd[1]: Started sshd@13-10.0.0.129:22-10.0.0.1:60452.service - OpenSSH per-connection server daemon (10.0.0.1:60452).
Jul 7 06:15:26.950252 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 60452 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:26.951617 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:26.957429 systemd-logind[1538]: New session 14 of user core.
Jul 7 06:15:26.968334 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:15:27.104232 sshd[4147]: Connection closed by 10.0.0.1 port 60452
Jul 7 06:15:27.104569 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:27.110404 systemd[1]: sshd@13-10.0.0.129:22-10.0.0.1:60452.service: Deactivated successfully.
Jul 7 06:15:27.112871 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:15:27.113916 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:15:27.115823 systemd-logind[1538]: Removed session 14.
Jul 7 06:15:32.117033 systemd[1]: Started sshd@14-10.0.0.129:22-10.0.0.1:43416.service - OpenSSH per-connection server daemon (10.0.0.1:43416).
Jul 7 06:15:32.168714 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 43416 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:32.170309 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:32.175490 systemd-logind[1538]: New session 15 of user core.
Jul 7 06:15:32.182224 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:15:32.312838 sshd[4162]: Connection closed by 10.0.0.1 port 43416
Jul 7 06:15:32.313129 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:32.317616 systemd[1]: sshd@14-10.0.0.129:22-10.0.0.1:43416.service: Deactivated successfully.
Jul 7 06:15:32.320536 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:15:32.321483 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:15:32.322908 systemd-logind[1538]: Removed session 15.
Jul 7 06:15:37.332437 systemd[1]: Started sshd@15-10.0.0.129:22-10.0.0.1:43454.service - OpenSSH per-connection server daemon (10.0.0.1:43454).
Jul 7 06:15:37.388339 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:37.390033 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:37.394965 systemd-logind[1538]: New session 16 of user core.
Jul 7 06:15:37.402238 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:15:37.585568 sshd[4179]: Connection closed by 10.0.0.1 port 43454
Jul 7 06:15:37.587353 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:37.597195 systemd[1]: sshd@15-10.0.0.129:22-10.0.0.1:43454.service: Deactivated successfully.
Jul 7 06:15:37.599647 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:15:37.600803 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:15:37.605482 systemd[1]: Started sshd@16-10.0.0.129:22-10.0.0.1:43458.service - OpenSSH per-connection server daemon (10.0.0.1:43458).
Jul 7 06:15:37.606375 systemd-logind[1538]: Removed session 16.
Jul 7 06:15:37.656435 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 43458 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:37.658133 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:37.663133 systemd-logind[1538]: New session 17 of user core.
Jul 7 06:15:37.673274 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:15:38.630616 sshd[4195]: Connection closed by 10.0.0.1 port 43458
Jul 7 06:15:38.631352 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:38.647869 systemd[1]: sshd@16-10.0.0.129:22-10.0.0.1:43458.service: Deactivated successfully.
Jul 7 06:15:38.649966 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:15:38.650830 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:15:38.654220 systemd[1]: Started sshd@17-10.0.0.129:22-10.0.0.1:43460.service - OpenSSH per-connection server daemon (10.0.0.1:43460).
Jul 7 06:15:38.654862 systemd-logind[1538]: Removed session 17.
Jul 7 06:15:38.707349 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 43460 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:38.708793 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:38.713399 systemd-logind[1538]: New session 18 of user core.
Jul 7 06:15:38.724215 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:15:39.939999 sshd[4210]: Connection closed by 10.0.0.1 port 43460
Jul 7 06:15:39.941080 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:39.955220 systemd[1]: sshd@17-10.0.0.129:22-10.0.0.1:43460.service: Deactivated successfully.
Jul 7 06:15:39.959670 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:15:39.961701 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:15:39.966940 systemd[1]: Started sshd@18-10.0.0.129:22-10.0.0.1:39138.service - OpenSSH per-connection server daemon (10.0.0.1:39138).
Jul 7 06:15:39.969261 systemd-logind[1538]: Removed session 18.
Jul 7 06:15:40.014572 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 39138 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:40.016376 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:40.021252 systemd-logind[1538]: New session 19 of user core.
Jul 7 06:15:40.030256 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:15:40.304470 sshd[4231]: Connection closed by 10.0.0.1 port 39138
Jul 7 06:15:40.306398 sshd-session[4229]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:40.317227 systemd[1]: sshd@18-10.0.0.129:22-10.0.0.1:39138.service: Deactivated successfully.
Jul 7 06:15:40.319185 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:15:40.320798 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:15:40.324647 systemd[1]: Started sshd@19-10.0.0.129:22-10.0.0.1:39148.service - OpenSSH per-connection server daemon (10.0.0.1:39148).
Jul 7 06:15:40.325680 systemd-logind[1538]: Removed session 19.
Jul 7 06:15:40.369329 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 39148 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:40.370942 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:40.377365 systemd-logind[1538]: New session 20 of user core.
Jul 7 06:15:40.382256 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:15:40.502219 sshd[4244]: Connection closed by 10.0.0.1 port 39148
Jul 7 06:15:40.502569 sshd-session[4242]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:40.508354 systemd[1]: sshd@19-10.0.0.129:22-10.0.0.1:39148.service: Deactivated successfully.
Jul 7 06:15:40.510480 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:15:40.511496 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:15:40.513480 systemd-logind[1538]: Removed session 20.
Jul 7 06:15:45.519671 systemd[1]: Started sshd@20-10.0.0.129:22-10.0.0.1:39150.service - OpenSSH per-connection server daemon (10.0.0.1:39150).
Jul 7 06:15:45.571861 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 39150 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:45.573631 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:45.578388 systemd-logind[1538]: New session 21 of user core.
Jul 7 06:15:45.589233 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:15:45.695313 sshd[4261]: Connection closed by 10.0.0.1 port 39150
Jul 7 06:15:45.695633 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:45.699936 systemd[1]: sshd@20-10.0.0.129:22-10.0.0.1:39150.service: Deactivated successfully.
Jul 7 06:15:45.702017 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:15:45.703146 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:15:45.704413 systemd-logind[1538]: Removed session 21.
Jul 7 06:15:46.005539 kubelet[2695]: E0707 06:15:46.005469 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:47.003232 kubelet[2695]: E0707 06:15:47.003165 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:15:50.713547 systemd[1]: Started sshd@21-10.0.0.129:22-10.0.0.1:54852.service - OpenSSH per-connection server daemon (10.0.0.1:54852).
Jul 7 06:15:50.759599 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 54852 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:50.761124 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:50.765449 systemd-logind[1538]: New session 22 of user core.
Jul 7 06:15:50.781242 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:15:50.912105 sshd[4278]: Connection closed by 10.0.0.1 port 54852
Jul 7 06:15:50.912560 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:50.917400 systemd[1]: sshd@21-10.0.0.129:22-10.0.0.1:54852.service: Deactivated successfully.
Jul 7 06:15:50.919975 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:15:50.920947 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:15:50.922697 systemd-logind[1538]: Removed session 22.
Jul 7 06:15:55.926433 systemd[1]: Started sshd@22-10.0.0.129:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866).
Jul 7 06:15:56.220172 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:56.221789 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:56.226924 systemd-logind[1538]: New session 23 of user core.
Jul 7 06:15:56.236396 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:15:56.499209 sshd[4294]: Connection closed by 10.0.0.1 port 54866
Jul 7 06:15:56.499562 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:56.511223 systemd[1]: sshd@22-10.0.0.129:22-10.0.0.1:54866.service: Deactivated successfully.
Jul 7 06:15:56.513637 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:15:56.515136 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:15:56.519396 systemd[1]: Started sshd@23-10.0.0.129:22-10.0.0.1:54882.service - OpenSSH per-connection server daemon (10.0.0.1:54882).
Jul 7 06:15:56.520413 systemd-logind[1538]: Removed session 23.
Jul 7 06:15:56.570620 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 54882 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ
Jul 7 06:15:56.572322 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:56.576979 systemd-logind[1538]: New session 24 of user core.
Jul 7 06:15:56.584240 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:15:58.408213 containerd[1562]: time="2025-07-07T06:15:58.407981607Z" level=info msg="StopContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" with timeout 30 (s)"
Jul 7 06:15:58.416312 containerd[1562]: time="2025-07-07T06:15:58.416240213Z" level=info msg="Stop container \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" with signal terminated"
Jul 7 06:15:58.428481 systemd[1]: cri-containerd-f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3.scope: Deactivated successfully.
Jul 7 06:15:58.429981 containerd[1562]: time="2025-07-07T06:15:58.429941875Z" level=info msg="received exit event container_id:\"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" id:\"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" pid:3249 exited_at:{seconds:1751868958 nanos:429434030}"
Jul 7 06:15:58.430388 containerd[1562]: time="2025-07-07T06:15:58.430364498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" id:\"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" pid:3249 exited_at:{seconds:1751868958 nanos:429434030}"
Jul 7 06:15:58.449177 containerd[1562]: time="2025-07-07T06:15:58.449106639Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:15:58.450405 containerd[1562]: time="2025-07-07T06:15:58.450357569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" id:\"85e6e28d3d67f9564f5968d24e6a79d1ff62b01f77cb946dd7a353eba6fe32d1\" pid:4336 exited_at:{seconds:1751868958 nanos:449690786}"
Jul 7 06:15:58.454889 containerd[1562]: time="2025-07-07T06:15:58.452340734Z" level=info msg="StopContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" with timeout 2 (s)"
Jul 7 06:15:58.457135 containerd[1562]: time="2025-07-07T06:15:58.455761351Z" level=info msg="Stop container \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" with signal terminated"
Jul 7 06:15:58.457929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3-rootfs.mount: Deactivated successfully.
Jul 7 06:15:58.466572 systemd-networkd[1486]: lxc_health: Link DOWN
Jul 7 06:15:58.466586 systemd-networkd[1486]: lxc_health: Lost carrier
Jul 7 06:15:58.476844 containerd[1562]: time="2025-07-07T06:15:58.476786150Z" level=info msg="StopContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" returns successfully"
Jul 7 06:15:58.477664 containerd[1562]: time="2025-07-07T06:15:58.477628443Z" level=info msg="StopPodSandbox for \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\""
Jul 7 06:15:58.477752 containerd[1562]: time="2025-07-07T06:15:58.477718812Z" level=info msg="Container to stop \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.481612 systemd[1]: cri-containerd-5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc.scope: Deactivated successfully.
Jul 7 06:15:58.482053 systemd[1]: cri-containerd-5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc.scope: Consumed 6.734s CPU time, 126.7M memory peak, 240K read from disk, 13.3M written to disk.
Jul 7 06:15:58.483121 containerd[1562]: time="2025-07-07T06:15:58.483047513Z" level=info msg="received exit event container_id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" pid:3359 exited_at:{seconds:1751868958 nanos:482712985}"
Jul 7 06:15:58.483342 containerd[1562]: time="2025-07-07T06:15:58.483314275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" id:\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" pid:3359 exited_at:{seconds:1751868958 nanos:482712985}"
Jul 7 06:15:58.487517 systemd[1]: cri-containerd-a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc.scope: Deactivated successfully.
Jul 7 06:15:58.493744 containerd[1562]: time="2025-07-07T06:15:58.493685599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" id:\"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" pid:2938 exit_status:137 exited_at:{seconds:1751868958 nanos:492820012}"
Jul 7 06:15:58.515460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc-rootfs.mount: Deactivated successfully.
Jul 7 06:15:58.528421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc-rootfs.mount: Deactivated successfully.
Jul 7 06:15:58.622271 containerd[1562]: time="2025-07-07T06:15:58.622167847Z" level=info msg="shim disconnected" id=a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc namespace=k8s.io
Jul 7 06:15:58.622271 containerd[1562]: time="2025-07-07T06:15:58.622212420Z" level=warning msg="cleaning up after shim disconnected" id=a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc namespace=k8s.io
Jul 7 06:15:58.660712 containerd[1562]: time="2025-07-07T06:15:58.622227068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:58.660712 containerd[1562]: time="2025-07-07T06:15:58.652004441Z" level=info msg="StopContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" returns successfully"
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661303301Z" level=info msg="StopPodSandbox for \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\""
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661383751Z" level=info msg="Container to stop \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661403549Z" level=info msg="Container to stop \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661415842Z" level=info msg="Container to stop \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661428054Z" level=info msg="Container to stop \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.661668 containerd[1562]: time="2025-07-07T06:15:58.661440408Z" level=info msg="Container to stop \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:15:58.670342 systemd[1]: cri-containerd-6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea.scope: Deactivated successfully.
Jul 7 06:15:58.686636 containerd[1562]: time="2025-07-07T06:15:58.686413625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" id:\"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" pid:2847 exit_status:137 exited_at:{seconds:1751868958 nanos:671363179}"
Jul 7 06:15:58.689908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc-shm.mount: Deactivated successfully.
Jul 7 06:15:58.698027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea-rootfs.mount: Deactivated successfully.
Jul 7 06:15:58.698326 containerd[1562]: time="2025-07-07T06:15:58.698041520Z" level=info msg="received exit event sandbox_id:\"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" exit_status:137 exited_at:{seconds:1751868958 nanos:492820012}"
Jul 7 06:15:58.703008 containerd[1562]: time="2025-07-07T06:15:58.702921318Z" level=info msg="TearDown network for sandbox \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" successfully"
Jul 7 06:15:58.703008 containerd[1562]: time="2025-07-07T06:15:58.702977052Z" level=info msg="StopPodSandbox for \"a55daa35f17c4ae5ab1969057bc3a98dbf12f058e2034e86378daba6261be6cc\" returns successfully"
Jul 7 06:15:58.704129 containerd[1562]: time="2025-07-07T06:15:58.704051019Z" level=info msg="received exit event sandbox_id:\"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" exit_status:137 exited_at:{seconds:1751868958 nanos:671363179}"
Jul 7 06:15:58.704450 containerd[1562]: time="2025-07-07T06:15:58.704328321Z" level=info msg="shim disconnected" id=6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea namespace=k8s.io
Jul 7 06:15:58.704450 containerd[1562]: time="2025-07-07T06:15:58.704360442Z" level=warning msg="cleaning up after shim disconnected" id=6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea namespace=k8s.io
Jul 7 06:15:58.704450 containerd[1562]: time="2025-07-07T06:15:58.704371633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:15:58.705949 containerd[1562]: time="2025-07-07T06:15:58.705350251Z" level=info msg="TearDown network for sandbox \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" successfully"
Jul 7 06:15:58.705949 containerd[1562]: time="2025-07-07T06:15:58.705389124Z" level=info msg="StopPodSandbox for \"6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea\" returns successfully"
Jul 7 06:15:58.802823 kubelet[2695]: I0707 06:15:58.802738 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-kernel\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.802823 kubelet[2695]: I0707 06:15:58.802801 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-etc-cni-netd\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.802823 kubelet[2695]: I0707 06:15:58.802832 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85150542-9e14-40c1-826b-ea3ca2302240-cilium-config-path\") pod \"85150542-9e14-40c1-826b-ea3ca2302240\" (UID: \"85150542-9e14-40c1-826b-ea3ca2302240\") "
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802891 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f512201-6f8b-4c0a-a6c1-61f2688630f3-clustermesh-secrets\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802918 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-lib-modules\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802939 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-net\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802964 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knmtg\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-kube-api-access-knmtg\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802937 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.803487 kubelet[2695]: I0707 06:15:58.802987 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hostproc\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803008 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-cgroup\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803030 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-run\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803049 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-bpf-maps\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803071 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-config-path\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803123 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-xtables-lock\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803679 kubelet[2695]: I0707 06:15:58.803147 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cni-path\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803074 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803172 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hubble-tls\") pod \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\" (UID: \"4f512201-6f8b-4c0a-a6c1-61f2688630f3\") "
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803197 2695 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfnpj\" (UniqueName: \"kubernetes.io/projected/85150542-9e14-40c1-826b-ea3ca2302240-kube-api-access-pfnpj\") pod \"85150542-9e14-40c1-826b-ea3ca2302240\" (UID: \"85150542-9e14-40c1-826b-ea3ca2302240\") "
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803257 2695 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803272 2695 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 7 06:15:58.803868 kubelet[2695]: I0707 06:15:58.803199 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804057 kubelet[2695]: I0707 06:15:58.803345 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804057 kubelet[2695]: I0707 06:15:58.803344 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804057 kubelet[2695]: I0707 06:15:58.803397 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804057 kubelet[2695]: I0707 06:15:58.803428 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804057 kubelet[2695]: I0707 06:15:58.803451 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804253 kubelet[2695]: I0707 06:15:58.803487 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.804253 kubelet[2695]: I0707 06:15:58.803513 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:15:58.821284 kubelet[2695]: I0707 06:15:58.821114 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85150542-9e14-40c1-826b-ea3ca2302240-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85150542-9e14-40c1-826b-ea3ca2302240" (UID: "85150542-9e14-40c1-826b-ea3ca2302240"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:15:58.823125 kubelet[2695]: I0707 06:15:58.822611 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f512201-6f8b-4c0a-a6c1-61f2688630f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:15:58.823285 kubelet[2695]: I0707 06:15:58.823164 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-kube-api-access-knmtg" (OuterVolumeSpecName: "kube-api-access-knmtg") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "kube-api-access-knmtg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:15:58.824276 kubelet[2695]: I0707 06:15:58.824229 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85150542-9e14-40c1-826b-ea3ca2302240-kube-api-access-pfnpj" (OuterVolumeSpecName: "kube-api-access-pfnpj") pod "85150542-9e14-40c1-826b-ea3ca2302240" (UID: "85150542-9e14-40c1-826b-ea3ca2302240"). InnerVolumeSpecName "kube-api-access-pfnpj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:15:58.824613 kubelet[2695]: I0707 06:15:58.824568 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:15:58.826861 kubelet[2695]: I0707 06:15:58.826794 2695 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f512201-6f8b-4c0a-a6c1-61f2688630f3" (UID: "4f512201-6f8b-4c0a-a6c1-61f2688630f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:15:58.903757 kubelet[2695]: I0707 06:15:58.903695 2695 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.903757 kubelet[2695]: I0707 06:15:58.903743 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85150542-9e14-40c1-826b-ea3ca2302240-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.903757 kubelet[2695]: I0707 06:15:58.903756 2695 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f512201-6f8b-4c0a-a6c1-61f2688630f3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.903757 kubelet[2695]: I0707 06:15:58.903765 2695 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.903757 kubelet[2695]: I0707 06:15:58.903778 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knmtg\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-kube-api-access-knmtg\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903789 2695 reconciler_common.go:299] "Volume detached for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903799 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903809 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903818 2695 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903828 2695 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903838 2695 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903848 2695 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f512201-6f8b-4c0a-a6c1-61f2688630f3-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:58.904027 kubelet[2695]: I0707 06:15:58.903858 2695 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f512201-6f8b-4c0a-a6c1-61f2688630f3-hubble-tls\") on node \"localhost\" DevicePath \"\"" 
Jul 7 06:15:58.904299 kubelet[2695]: I0707 06:15:58.903868 2695 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfnpj\" (UniqueName: \"kubernetes.io/projected/85150542-9e14-40c1-826b-ea3ca2302240-kube-api-access-pfnpj\") on node \"localhost\" DevicePath \"\"" Jul 7 06:15:59.011015 systemd[1]: Removed slice kubepods-burstable-pod4f512201_6f8b_4c0a_a6c1_61f2688630f3.slice - libcontainer container kubepods-burstable-pod4f512201_6f8b_4c0a_a6c1_61f2688630f3.slice. Jul 7 06:15:59.011267 systemd[1]: kubepods-burstable-pod4f512201_6f8b_4c0a_a6c1_61f2688630f3.slice: Consumed 6.840s CPU time, 127M memory peak, 248K read from disk, 13.3M written to disk. Jul 7 06:15:59.012969 systemd[1]: Removed slice kubepods-besteffort-pod85150542_9e14_40c1_826b_ea3ca2302240.slice - libcontainer container kubepods-besteffort-pod85150542_9e14_40c1_826b_ea3ca2302240.slice. Jul 7 06:15:59.241983 kubelet[2695]: I0707 06:15:59.241936 2695 scope.go:117] "RemoveContainer" containerID="5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc" Jul 7 06:15:59.244332 containerd[1562]: time="2025-07-07T06:15:59.244278822Z" level=info msg="RemoveContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\"" Jul 7 06:15:59.255073 containerd[1562]: time="2025-07-07T06:15:59.255006633Z" level=info msg="RemoveContainer for \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" returns successfully" Jul 7 06:15:59.255447 kubelet[2695]: I0707 06:15:59.255401 2695 scope.go:117] "RemoveContainer" containerID="47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3" Jul 7 06:15:59.257146 containerd[1562]: time="2025-07-07T06:15:59.257109005Z" level=info msg="RemoveContainer for \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\"" Jul 7 06:15:59.279997 containerd[1562]: time="2025-07-07T06:15:59.279863357Z" level=info msg="RemoveContainer for \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" returns 
successfully" Jul 7 06:15:59.280159 kubelet[2695]: I0707 06:15:59.280136 2695 scope.go:117] "RemoveContainer" containerID="1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016" Jul 7 06:15:59.284226 containerd[1562]: time="2025-07-07T06:15:59.284185523Z" level=info msg="RemoveContainer for \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\"" Jul 7 06:15:59.289293 containerd[1562]: time="2025-07-07T06:15:59.289252470Z" level=info msg="RemoveContainer for \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" returns successfully" Jul 7 06:15:59.289555 kubelet[2695]: I0707 06:15:59.289514 2695 scope.go:117] "RemoveContainer" containerID="ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4" Jul 7 06:15:59.290853 containerd[1562]: time="2025-07-07T06:15:59.290827300Z" level=info msg="RemoveContainer for \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\"" Jul 7 06:15:59.294810 containerd[1562]: time="2025-07-07T06:15:59.294774722Z" level=info msg="RemoveContainer for \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" returns successfully" Jul 7 06:15:59.294995 kubelet[2695]: I0707 06:15:59.294968 2695 scope.go:117] "RemoveContainer" containerID="ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61" Jul 7 06:15:59.296333 containerd[1562]: time="2025-07-07T06:15:59.296309408Z" level=info msg="RemoveContainer for \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\"" Jul 7 06:15:59.300834 containerd[1562]: time="2025-07-07T06:15:59.300787627Z" level=info msg="RemoveContainer for \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" returns successfully" Jul 7 06:15:59.301043 kubelet[2695]: I0707 06:15:59.301014 2695 scope.go:117] "RemoveContainer" containerID="5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc" Jul 7 06:15:59.301406 containerd[1562]: time="2025-07-07T06:15:59.301360084Z" level=error msg="ContainerStatus for 
\"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\": not found" Jul 7 06:15:59.305296 kubelet[2695]: E0707 06:15:59.305256 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\": not found" containerID="5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc" Jul 7 06:15:59.305374 kubelet[2695]: I0707 06:15:59.305300 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc"} err="failed to get container status \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c345359a445518e235618d30c61ee3a0e493d6e3ffd70b1b6c179cdefa34ddc\": not found" Jul 7 06:15:59.305374 kubelet[2695]: I0707 06:15:59.305348 2695 scope.go:117] "RemoveContainer" containerID="47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3" Jul 7 06:15:59.305608 containerd[1562]: time="2025-07-07T06:15:59.305574718Z" level=error msg="ContainerStatus for \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\": not found" Jul 7 06:15:59.305710 kubelet[2695]: E0707 06:15:59.305686 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\": not found" 
containerID="47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3" Jul 7 06:15:59.305754 kubelet[2695]: I0707 06:15:59.305709 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3"} err="failed to get container status \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"47ac48cf9b5d0f0abc0384feb4d62475c8f2b191f7e640d67a9799007f075fb3\": not found" Jul 7 06:15:59.305754 kubelet[2695]: I0707 06:15:59.305723 2695 scope.go:117] "RemoveContainer" containerID="1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016" Jul 7 06:15:59.305899 containerd[1562]: time="2025-07-07T06:15:59.305861487Z" level=error msg="ContainerStatus for \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\": not found" Jul 7 06:15:59.305986 kubelet[2695]: E0707 06:15:59.305961 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\": not found" containerID="1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016" Jul 7 06:15:59.306022 kubelet[2695]: I0707 06:15:59.305985 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016"} err="failed to get container status \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\": rpc error: code = NotFound desc = an error occurred when try to find container \"1986606c5b0356c9f99193b2c498eb11ec014dd6f394ab3b3b22d10f5f469016\": not found" Jul 7 06:15:59.306022 
kubelet[2695]: I0707 06:15:59.306000 2695 scope.go:117] "RemoveContainer" containerID="ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4" Jul 7 06:15:59.306195 containerd[1562]: time="2025-07-07T06:15:59.306164156Z" level=error msg="ContainerStatus for \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\": not found" Jul 7 06:15:59.306296 kubelet[2695]: E0707 06:15:59.306271 2695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\": not found" containerID="ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4" Jul 7 06:15:59.306345 kubelet[2695]: I0707 06:15:59.306294 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4"} err="failed to get container status \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ead5dc135dfb9f9bd783179e073a9fd1238c3a81b733b2ca745ddc66524896b4\": not found" Jul 7 06:15:59.306345 kubelet[2695]: I0707 06:15:59.306309 2695 scope.go:117] "RemoveContainer" containerID="ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61" Jul 7 06:15:59.306494 containerd[1562]: time="2025-07-07T06:15:59.306463709Z" level=error msg="ContainerStatus for \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\": not found" Jul 7 06:15:59.306596 kubelet[2695]: E0707 06:15:59.306576 2695 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\": not found" containerID="ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61" Jul 7 06:15:59.306633 kubelet[2695]: I0707 06:15:59.306600 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61"} err="failed to get container status \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef7a06707650fc932a3c3e52cf8d7b0747bf6114b0e97988d8b3f9e8d5ed5c61\": not found" Jul 7 06:15:59.306633 kubelet[2695]: I0707 06:15:59.306619 2695 scope.go:117] "RemoveContainer" containerID="f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3" Jul 7 06:15:59.308083 containerd[1562]: time="2025-07-07T06:15:59.308060111Z" level=info msg="RemoveContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\"" Jul 7 06:15:59.311933 containerd[1562]: time="2025-07-07T06:15:59.311899378Z" level=info msg="RemoveContainer for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" returns successfully" Jul 7 06:15:59.312119 kubelet[2695]: I0707 06:15:59.312079 2695 scope.go:117] "RemoveContainer" containerID="f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3" Jul 7 06:15:59.312377 containerd[1562]: time="2025-07-07T06:15:59.312323626Z" level=error msg="ContainerStatus for \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\": not found" Jul 7 06:15:59.312484 kubelet[2695]: E0707 06:15:59.312460 2695 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\": not found" containerID="f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3" Jul 7 06:15:59.312525 kubelet[2695]: I0707 06:15:59.312490 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3"} err="failed to get container status \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1dc75e6b6796e2f722231369238dd23ced48e14d71fc3ddc9cd1d2a43970dc3\": not found" Jul 7 06:15:59.457328 systemd[1]: var-lib-kubelet-pods-85150542\x2d9e14\x2d40c1\x2d826b\x2dea3ca2302240-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfnpj.mount: Deactivated successfully. Jul 7 06:15:59.457476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6103eecd0bd600ddc7f7f7db22a9d38c36b6c851dbd6a889cc8cf39ca4de35ea-shm.mount: Deactivated successfully. Jul 7 06:15:59.457575 systemd[1]: var-lib-kubelet-pods-4f512201\x2d6f8b\x2d4c0a\x2da6c1\x2d61f2688630f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknmtg.mount: Deactivated successfully. Jul 7 06:15:59.457673 systemd[1]: var-lib-kubelet-pods-4f512201\x2d6f8b\x2d4c0a\x2da6c1\x2d61f2688630f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 06:15:59.457767 systemd[1]: var-lib-kubelet-pods-4f512201\x2d6f8b\x2d4c0a\x2da6c1\x2d61f2688630f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 06:16:00.346703 sshd[4309]: Connection closed by 10.0.0.1 port 54882 Jul 7 06:16:00.347490 sshd-session[4307]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:00.359996 systemd[1]: sshd@23-10.0.0.129:22-10.0.0.1:54882.service: Deactivated successfully. 
Jul 7 06:16:00.361915 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 06:16:00.362734 systemd-logind[1538]: Session 24 logged out. Waiting for processes to exit. Jul 7 06:16:00.365583 systemd[1]: Started sshd@24-10.0.0.129:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350). Jul 7 06:16:00.366464 systemd-logind[1538]: Removed session 24. Jul 7 06:16:00.424868 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:16:00.426490 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:00.431720 systemd-logind[1538]: New session 25 of user core. Jul 7 06:16:00.445271 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 06:16:00.963893 sshd[4460]: Connection closed by 10.0.0.1 port 41350 Jul 7 06:16:00.964169 sshd-session[4458]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:00.978552 systemd[1]: sshd@24-10.0.0.129:22-10.0.0.1:41350.service: Deactivated successfully. Jul 7 06:16:00.980806 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:16:00.982455 systemd-logind[1538]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:16:00.987027 systemd-logind[1538]: Removed session 25. Jul 7 06:16:00.992691 systemd[1]: Started sshd@25-10.0.0.129:22-10.0.0.1:41354.service - OpenSSH per-connection server daemon (10.0.0.1:41354). Jul 7 06:16:01.006144 systemd[1]: Created slice kubepods-burstable-pod79cf7707_6988_4f83_95c0_e70daf7f5c4d.slice - libcontainer container kubepods-burstable-pod79cf7707_6988_4f83_95c0_e70daf7f5c4d.slice. 
Jul 7 06:16:01.010379 kubelet[2695]: I0707 06:16:01.010331 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f512201-6f8b-4c0a-a6c1-61f2688630f3" path="/var/lib/kubelet/pods/4f512201-6f8b-4c0a-a6c1-61f2688630f3/volumes" Jul 7 06:16:01.011589 kubelet[2695]: I0707 06:16:01.011121 2695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85150542-9e14-40c1-826b-ea3ca2302240" path="/var/lib/kubelet/pods/85150542-9e14-40c1-826b-ea3ca2302240/volumes" Jul 7 06:16:01.016779 kubelet[2695]: I0707 06:16:01.016627 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-hostproc\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.016779 kubelet[2695]: I0707 06:16:01.016674 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79cf7707-6988-4f83-95c0-e70daf7f5c4d-cilium-config-path\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.016779 kubelet[2695]: I0707 06:16:01.016698 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79cf7707-6988-4f83-95c0-e70daf7f5c4d-hubble-tls\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.016779 kubelet[2695]: I0707 06:16:01.016735 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-cilium-run\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.016779 
kubelet[2695]: I0707 06:16:01.016756 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-bpf-maps\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.016779 kubelet[2695]: I0707 06:16:01.016776 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79cf7707-6988-4f83-95c0-e70daf7f5c4d-clustermesh-secrets\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016799 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-host-proc-sys-net\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016822 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-cni-path\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016843 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-lib-modules\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016862 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-host-proc-sys-kernel\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016883 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/79cf7707-6988-4f83-95c0-e70daf7f5c4d-kube-api-access-7spgn\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017251 kubelet[2695]: I0707 06:16:01.016908 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-etc-cni-netd\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017466 kubelet[2695]: I0707 06:16:01.016957 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79cf7707-6988-4f83-95c0-e70daf7f5c4d-cilium-ipsec-secrets\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017466 kubelet[2695]: I0707 06:16:01.016983 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-xtables-lock\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.017466 kubelet[2695]: I0707 06:16:01.017002 2695 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/79cf7707-6988-4f83-95c0-e70daf7f5c4d-cilium-cgroup\") pod \"cilium-2xt9g\" (UID: \"79cf7707-6988-4f83-95c0-e70daf7f5c4d\") " pod="kube-system/cilium-2xt9g" Jul 7 06:16:01.053165 sshd[4472]: Accepted publickey for core from 10.0.0.1 port 41354 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:16:01.055027 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:01.060318 systemd-logind[1538]: New session 26 of user core. Jul 7 06:16:01.070331 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 06:16:01.124124 sshd[4474]: Connection closed by 10.0.0.1 port 41354 Jul 7 06:16:01.126224 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:01.143297 systemd[1]: sshd@25-10.0.0.129:22-10.0.0.1:41354.service: Deactivated successfully. Jul 7 06:16:01.145334 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:16:01.146436 systemd-logind[1538]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:16:01.149298 systemd[1]: Started sshd@26-10.0.0.129:22-10.0.0.1:41366.service - OpenSSH per-connection server daemon (10.0.0.1:41366). Jul 7 06:16:01.150090 systemd-logind[1538]: Removed session 26. Jul 7 06:16:01.206976 sshd[4484]: Accepted publickey for core from 10.0.0.1 port 41366 ssh2: RSA SHA256:A8rgjtEfWsEGE4smvhHSEJA2ZNBF4eVGnULCJgixfWQ Jul 7 06:16:01.208641 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:01.213584 systemd-logind[1538]: New session 27 of user core. Jul 7 06:16:01.221311 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 06:16:01.609724 kubelet[2695]: E0707 06:16:01.609553 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:01.610268 containerd[1562]: time="2025-07-07T06:16:01.610206806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xt9g,Uid:79cf7707-6988-4f83-95c0-e70daf7f5c4d,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:01.643267 containerd[1562]: time="2025-07-07T06:16:01.643207192Z" level=info msg="connecting to shim 5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:16:01.672358 systemd[1]: Started cri-containerd-5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238.scope - libcontainer container 5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238.
Jul 7 06:16:01.702656 containerd[1562]: time="2025-07-07T06:16:01.702608839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xt9g,Uid:79cf7707-6988-4f83-95c0-e70daf7f5c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\""
Jul 7 06:16:01.703693 kubelet[2695]: E0707 06:16:01.703667 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:01.709668 containerd[1562]: time="2025-07-07T06:16:01.709604692Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:16:01.729293 containerd[1562]: time="2025-07-07T06:16:01.729237730Z" level=info msg="Container d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:01.736507 containerd[1562]: time="2025-07-07T06:16:01.736458365Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\""
Jul 7 06:16:01.736973 containerd[1562]: time="2025-07-07T06:16:01.736947386Z" level=info msg="StartContainer for \"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\""
Jul 7 06:16:01.737971 containerd[1562]: time="2025-07-07T06:16:01.737929765Z" level=info msg="connecting to shim d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" protocol=ttrpc version=3
Jul 7 06:16:01.768348 systemd[1]: Started cri-containerd-d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27.scope - libcontainer container d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27.
Jul 7 06:16:01.797224 containerd[1562]: time="2025-07-07T06:16:01.797166242Z" level=info msg="StartContainer for \"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\" returns successfully"
Jul 7 06:16:01.808146 systemd[1]: cri-containerd-d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27.scope: Deactivated successfully.
Jul 7 06:16:01.810535 containerd[1562]: time="2025-07-07T06:16:01.810494428Z" level=info msg="received exit event container_id:\"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\" id:\"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\" pid:4553 exited_at:{seconds:1751868961 nanos:810218899}"
Jul 7 06:16:01.810636 containerd[1562]: time="2025-07-07T06:16:01.810555773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\" id:\"d781f9d1f7ba71be6fd215d576dc5bf548cc309a1fe909a9ea5ca7b054ce9b27\" pid:4553 exited_at:{seconds:1751868961 nanos:810218899}"
Jul 7 06:16:02.072605 kubelet[2695]: E0707 06:16:02.072558 2695 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 06:16:02.254642 kubelet[2695]: E0707 06:16:02.254604 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:02.259361 containerd[1562]: time="2025-07-07T06:16:02.259317439Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:16:02.267050 containerd[1562]: time="2025-07-07T06:16:02.266999614Z" level=info msg="Container 6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:02.275119 containerd[1562]: time="2025-07-07T06:16:02.275051976Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\""
Jul 7 06:16:02.276049 containerd[1562]: time="2025-07-07T06:16:02.275999151Z" level=info msg="StartContainer for \"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\""
Jul 7 06:16:02.277323 containerd[1562]: time="2025-07-07T06:16:02.277274594Z" level=info msg="connecting to shim 6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" protocol=ttrpc version=3
Jul 7 06:16:02.305398 systemd[1]: Started cri-containerd-6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa.scope - libcontainer container 6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa.
Jul 7 06:16:02.343220 containerd[1562]: time="2025-07-07T06:16:02.343067722Z" level=info msg="StartContainer for \"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\" returns successfully"
Jul 7 06:16:02.349921 systemd[1]: cri-containerd-6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa.scope: Deactivated successfully.
Jul 7 06:16:02.350411 containerd[1562]: time="2025-07-07T06:16:02.350350225Z" level=info msg="received exit event container_id:\"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\" id:\"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\" pid:4598 exited_at:{seconds:1751868962 nanos:350040370}"
Jul 7 06:16:02.350481 containerd[1562]: time="2025-07-07T06:16:02.350397844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\" id:\"6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa\" pid:4598 exited_at:{seconds:1751868962 nanos:350040370}"
Jul 7 06:16:02.372015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d9859d6a9e2ec03a0119d1450a5b9ea6972bf91ead5a129bb93d8f7bb6677fa-rootfs.mount: Deactivated successfully.
Jul 7 06:16:03.258998 kubelet[2695]: E0707 06:16:03.258960 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:03.266217 containerd[1562]: time="2025-07-07T06:16:03.266152487Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 06:16:03.287070 containerd[1562]: time="2025-07-07T06:16:03.286999444Z" level=info msg="Container 8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:03.297913 containerd[1562]: time="2025-07-07T06:16:03.297847297Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\""
Jul 7 06:16:03.298552 containerd[1562]: time="2025-07-07T06:16:03.298507953Z" level=info msg="StartContainer for \"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\""
Jul 7 06:16:03.300421 containerd[1562]: time="2025-07-07T06:16:03.300391154Z" level=info msg="connecting to shim 8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" protocol=ttrpc version=3
Jul 7 06:16:03.325317 systemd[1]: Started cri-containerd-8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83.scope - libcontainer container 8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83.
Jul 7 06:16:03.379821 systemd[1]: cri-containerd-8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83.scope: Deactivated successfully.
Jul 7 06:16:03.380675 containerd[1562]: time="2025-07-07T06:16:03.380615727Z" level=info msg="received exit event container_id:\"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\" id:\"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\" pid:4642 exited_at:{seconds:1751868963 nanos:380214611}"
Jul 7 06:16:03.382137 containerd[1562]: time="2025-07-07T06:16:03.381406548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\" id:\"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\" pid:4642 exited_at:{seconds:1751868963 nanos:380214611}"
Jul 7 06:16:03.382758 containerd[1562]: time="2025-07-07T06:16:03.382729273Z" level=info msg="StartContainer for \"8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83\" returns successfully"
Jul 7 06:16:03.413207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c95ea39b3a41518930deebc1ce2e787c28cc9161f3377d6837cd52276ab9a83-rootfs.mount: Deactivated successfully.
Jul 7 06:16:04.264183 kubelet[2695]: E0707 06:16:04.264138 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:04.445372 containerd[1562]: time="2025-07-07T06:16:04.445307460Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:16:04.463206 containerd[1562]: time="2025-07-07T06:16:04.463128072Z" level=info msg="Container efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:04.468592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574920460.mount: Deactivated successfully.
Jul 7 06:16:04.473255 containerd[1562]: time="2025-07-07T06:16:04.473207890Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\""
Jul 7 06:16:04.473939 containerd[1562]: time="2025-07-07T06:16:04.473889937Z" level=info msg="StartContainer for \"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\""
Jul 7 06:16:04.475915 containerd[1562]: time="2025-07-07T06:16:04.475875854Z" level=info msg="connecting to shim efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" protocol=ttrpc version=3
Jul 7 06:16:04.505373 systemd[1]: Started cri-containerd-efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf.scope - libcontainer container efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf.
Jul 7 06:16:04.535108 systemd[1]: cri-containerd-efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf.scope: Deactivated successfully.
Jul 7 06:16:04.537068 containerd[1562]: time="2025-07-07T06:16:04.537023002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\" id:\"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\" pid:4682 exited_at:{seconds:1751868964 nanos:535436870}"
Jul 7 06:16:04.537464 containerd[1562]: time="2025-07-07T06:16:04.537429890Z" level=info msg="received exit event container_id:\"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\" id:\"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\" pid:4682 exited_at:{seconds:1751868964 nanos:535436870}"
Jul 7 06:16:04.545747 containerd[1562]: time="2025-07-07T06:16:04.545688802Z" level=info msg="StartContainer for \"efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf\" returns successfully"
Jul 7 06:16:04.559784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efc2f87e55edf5918c2af013d0cc9d0ef04bf91f38f1256543c48db4ae7002bf-rootfs.mount: Deactivated successfully.
Jul 7 06:16:05.270659 kubelet[2695]: E0707 06:16:05.270620 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:05.277118 containerd[1562]: time="2025-07-07T06:16:05.277050338Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:16:05.291517 containerd[1562]: time="2025-07-07T06:16:05.291467397Z" level=info msg="Container 487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:05.300325 containerd[1562]: time="2025-07-07T06:16:05.300195966Z" level=info msg="CreateContainer within sandbox \"5fc626a25f279593040fd921e72c2ee1f41ed7e21e8a6eeb15d670a2d479a238\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\""
Jul 7 06:16:05.301275 containerd[1562]: time="2025-07-07T06:16:05.301225810Z" level=info msg="StartContainer for \"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\""
Jul 7 06:16:05.303042 containerd[1562]: time="2025-07-07T06:16:05.302995301Z" level=info msg="connecting to shim 487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26" address="unix:///run/containerd/s/1570238b4de1341b4768d5a9522b6f8113da360d0e5cd1e51dd90c945b77de76" protocol=ttrpc version=3
Jul 7 06:16:05.324277 systemd[1]: Started cri-containerd-487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26.scope - libcontainer container 487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26.
Jul 7 06:16:05.365661 containerd[1562]: time="2025-07-07T06:16:05.365605424Z" level=info msg="StartContainer for \"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" returns successfully"
Jul 7 06:16:05.436041 containerd[1562]: time="2025-07-07T06:16:05.435970779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"f392874826c07e319cd99a408484c55030f35a76cd8ded7b08d9032bf7c6e11a\" pid:4752 exited_at:{seconds:1751868965 nanos:435637730}"
Jul 7 06:16:05.872150 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 7 06:16:06.276957 kubelet[2695]: E0707 06:16:06.276919 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:06.463872 kubelet[2695]: I0707 06:16:06.463493 2695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2xt9g" podStartSLOduration=6.463478461 podStartE2EDuration="6.463478461s" podCreationTimestamp="2025-07-07 06:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:06.462895029 +0000 UTC m=+89.834038682" watchObservedRunningTime="2025-07-07 06:16:06.463478461 +0000 UTC m=+89.834622114"
Jul 7 06:16:07.612278 kubelet[2695]: E0707 06:16:07.612234 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:07.633454 containerd[1562]: time="2025-07-07T06:16:07.633397837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"998fe001e943b586a11299f6cefcaa32da3efbc36ffc38385e9a36be4e709012\" pid:4893 exit_status:1 exited_at:{seconds:1751868967 nanos:631858257}"
Jul 7 06:16:09.056025 systemd-networkd[1486]: lxc_health: Link UP
Jul 7 06:16:09.057824 systemd-networkd[1486]: lxc_health: Gained carrier
Jul 7 06:16:09.613127 kubelet[2695]: E0707 06:16:09.612806 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:09.774469 containerd[1562]: time="2025-07-07T06:16:09.774421985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"b2098eac207cc401eae9b5f9d407148e10ee628e8bf8a20534dc5886b93d277f\" pid:5278 exited_at:{seconds:1751868969 nanos:772338964}"
Jul 7 06:16:10.283657 kubelet[2695]: E0707 06:16:10.283580 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:11.003932 kubelet[2695]: E0707 06:16:11.003474 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:11.085400 systemd-networkd[1486]: lxc_health: Gained IPv6LL
Jul 7 06:16:11.285417 kubelet[2695]: E0707 06:16:11.285131 2695 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:16:12.153512 containerd[1562]: time="2025-07-07T06:16:12.153463568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"7f0e8f624e55865a85d6c6d48b36934bbe93887b5c72a1a07a5c280e46071f22\" pid:5315 exited_at:{seconds:1751868972 nanos:153022372}"
Jul 7 06:16:14.251723 containerd[1562]: time="2025-07-07T06:16:14.251675326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"99a68234d69cb9f890bcc1ff192000ed5b777dcfece5431ecfbd225cfcc713b7\" pid:5348 exited_at:{seconds:1751868974 nanos:251304302}"
Jul 7 06:16:16.366064 containerd[1562]: time="2025-07-07T06:16:16.366009428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"487789d6cf68f7c152aafc0e4e98bef5dc83231109a4d94062234df321f0fd26\" id:\"681e072b5321211029f0c34c4701b6a3202d795ab741e1cd8b81b7902d89e023\" pid:5371 exited_at:{seconds:1751868976 nanos:365671546}"
Jul 7 06:16:16.372715 sshd[4486]: Connection closed by 10.0.0.1 port 41366
Jul 7 06:16:16.373138 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:16.377192 systemd[1]: sshd@26-10.0.0.129:22-10.0.0.1:41366.service: Deactivated successfully.
Jul 7 06:16:16.379407 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 06:16:16.380237 systemd-logind[1538]: Session 27 logged out. Waiting for processes to exit.
Jul 7 06:16:16.381472 systemd-logind[1538]: Removed session 27.