Jul 12 00:16:00.866812 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jul 11 22:06:57 -00 2025
Jul 12 00:16:00.866835 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:16:00.866846 kernel: BIOS-provided physical RAM map:
Jul 12 00:16:00.866853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 12 00:16:00.866859 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 12 00:16:00.866866 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 12 00:16:00.866874 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 12 00:16:00.866881 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 12 00:16:00.866894 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 12 00:16:00.866901 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 12 00:16:00.866907 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 12 00:16:00.866914 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 12 00:16:00.866920 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 12 00:16:00.866927 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 12 00:16:00.866938 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 12 00:16:00.866946 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 12 00:16:00.866956 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 12 00:16:00.866963 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 12 00:16:00.866970 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 12 00:16:00.866977 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 12 00:16:00.866984 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 12 00:16:00.866992 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 12 00:16:00.866999 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 12 00:16:00.867006 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 00:16:00.867013 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 12 00:16:00.867030 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 00:16:00.867038 kernel: NX (Execute Disable) protection: active
Jul 12 00:16:00.867046 kernel: APIC: Static calls initialized
Jul 12 00:16:00.867053 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 12 00:16:00.867060 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 12 00:16:00.867067 kernel: extended physical RAM map:
Jul 12 00:16:00.867074 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 12 00:16:00.867082 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 12 00:16:00.867089 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 12 00:16:00.867096 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 12 00:16:00.867103 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 12 00:16:00.867113 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 12 00:16:00.867120 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 12 00:16:00.867127 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 12 00:16:00.867135 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 12 00:16:00.867145 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 12 00:16:00.867152 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 12 00:16:00.867162 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 12 00:16:00.867170 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 12 00:16:00.867177 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 12 00:16:00.867185 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 12 00:16:00.867195 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 12 00:16:00.867202 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 12 00:16:00.867210 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 12 00:16:00.867219 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 12 00:16:00.867228 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 12 00:16:00.867239 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 12 00:16:00.867247 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 12 00:16:00.867255 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 12 00:16:00.867262 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 12 00:16:00.867270 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 00:16:00.867277 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 12 00:16:00.867285 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 00:16:00.867295 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:16:00.867303 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 12 00:16:00.867310 kernel: random: crng init done
Jul 12 00:16:00.867320 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 12 00:16:00.867328 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 12 00:16:00.867339 kernel: secureboot: Secure boot disabled
Jul 12 00:16:00.867347 kernel: SMBIOS 2.8 present.
Jul 12 00:16:00.867355 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 12 00:16:00.867362 kernel: DMI: Memory slots populated: 1/1
Jul 12 00:16:00.867370 kernel: Hypervisor detected: KVM
Jul 12 00:16:00.867377 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 12 00:16:00.867385 kernel: kvm-clock: using sched offset of 6248436957 cycles
Jul 12 00:16:00.867393 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 12 00:16:00.867401 kernel: tsc: Detected 2794.746 MHz processor
Jul 12 00:16:00.867409 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 12 00:16:00.867424 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 12 00:16:00.867435 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 12 00:16:00.867451 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 12 00:16:00.867470 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 12 00:16:00.867485 kernel: Using GB pages for direct mapping
Jul 12 00:16:00.867493 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:16:00.867501 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 12 00:16:00.867509 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:16:00.867516 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867527 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867534 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 12 00:16:00.867542 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867550 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867571 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867579 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:16:00.867586 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 12 00:16:00.867594 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 12 00:16:00.867602 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 12 00:16:00.867613 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 12 00:16:00.867621 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 12 00:16:00.867628 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 12 00:16:00.867636 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 12 00:16:00.867643 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 12 00:16:00.867651 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 12 00:16:00.867659 kernel: No NUMA configuration found
Jul 12 00:16:00.867667 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 12 00:16:00.867675 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 12 00:16:00.867685 kernel: Zone ranges:
Jul 12 00:16:00.867692 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 12 00:16:00.867700 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 12 00:16:00.867708 kernel: Normal empty
Jul 12 00:16:00.867715 kernel: Device empty
Jul 12 00:16:00.867723 kernel: Movable zone start for each node
Jul 12 00:16:00.867730 kernel: Early memory node ranges
Jul 12 00:16:00.867749 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 12 00:16:00.867757 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 12 00:16:00.867777 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 12 00:16:00.867788 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 12 00:16:00.867796 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 12 00:16:00.867804 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 12 00:16:00.867812 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 12 00:16:00.867819 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 12 00:16:00.867827 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 12 00:16:00.867837 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 00:16:00.867845 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 12 00:16:00.867862 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 12 00:16:00.867870 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 00:16:00.867878 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 12 00:16:00.867886 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 12 00:16:00.867897 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 12 00:16:00.867905 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 12 00:16:00.867913 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 12 00:16:00.867921 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 12 00:16:00.867929 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 12 00:16:00.867940 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 12 00:16:00.867948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 12 00:16:00.867956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 12 00:16:00.867964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 12 00:16:00.867973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 12 00:16:00.867981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 12 00:16:00.867989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 12 00:16:00.867997 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 12 00:16:00.868005 kernel: TSC deadline timer available
Jul 12 00:16:00.868015 kernel: CPU topo: Max. logical packages: 1
Jul 12 00:16:00.868030 kernel: CPU topo: Max. logical dies: 1
Jul 12 00:16:00.868038 kernel: CPU topo: Max. dies per package: 1
Jul 12 00:16:00.868047 kernel: CPU topo: Max. threads per core: 1
Jul 12 00:16:00.868055 kernel: CPU topo: Num. cores per package: 4
Jul 12 00:16:00.868063 kernel: CPU topo: Num. threads per package: 4
Jul 12 00:16:00.868071 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 12 00:16:00.868079 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 12 00:16:00.868088 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 12 00:16:00.868096 kernel: kvm-guest: setup PV sched yield
Jul 12 00:16:00.868108 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 12 00:16:00.868117 kernel: Booting paravirtualized kernel on KVM
Jul 12 00:16:00.868125 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 12 00:16:00.868133 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 12 00:16:00.868141 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 12 00:16:00.868149 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 12 00:16:00.868158 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 12 00:16:00.868165 kernel: kvm-guest: PV spinlocks enabled
Jul 12 00:16:00.868176 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 12 00:16:00.868185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2
Jul 12 00:16:00.868197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:16:00.868205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:16:00.868213 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:16:00.868221 kernel: Fallback order for Node 0: 0
Jul 12 00:16:00.868229 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 12 00:16:00.868237 kernel: Policy zone: DMA32
Jul 12 00:16:00.868245 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:16:00.868255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:16:00.868263 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 12 00:16:00.868271 kernel: ftrace: allocated 157 pages with 5 groups
Jul 12 00:16:00.868279 kernel: Dynamic Preempt: voluntary
Jul 12 00:16:00.868287 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:16:00.868295 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:16:00.868303 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:16:00.868312 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:16:00.868320 kernel: Rude variant of Tasks RCU enabled.
Jul 12 00:16:00.868330 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:16:00.868338 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:16:00.868348 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:16:00.868356 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:00.868364 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:00.868373 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:16:00.868381 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 12 00:16:00.868389 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:16:00.868397 kernel: Console: colour dummy device 80x25
Jul 12 00:16:00.868407 kernel: printk: legacy console [ttyS0] enabled
Jul 12 00:16:00.868415 kernel: ACPI: Core revision 20240827
Jul 12 00:16:00.868423 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 12 00:16:00.868431 kernel: APIC: Switch to symmetric I/O mode setup
Jul 12 00:16:00.868438 kernel: x2apic enabled
Jul 12 00:16:00.868447 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 12 00:16:00.868455 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 12 00:16:00.868463 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 12 00:16:00.868471 kernel: kvm-guest: setup PV IPIs
Jul 12 00:16:00.868481 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 12 00:16:00.868489 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:16:00.868497 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 12 00:16:00.868505 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 12 00:16:00.868513 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 12 00:16:00.868521 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 12 00:16:00.868529 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 12 00:16:00.868537 kernel: Spectre V2 : Mitigation: Retpolines
Jul 12 00:16:00.868545 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 12 00:16:00.868555 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 12 00:16:00.868659 kernel: RETBleed: Mitigation: untrained return thunk
Jul 12 00:16:00.868667 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 12 00:16:00.868678 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 12 00:16:00.868686 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 12 00:16:00.868695 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 12 00:16:00.868703 kernel: x86/bugs: return thunk changed
Jul 12 00:16:00.868711 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 12 00:16:00.868722 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 12 00:16:00.868730 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 12 00:16:00.868738 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 12 00:16:00.868746 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 12 00:16:00.868754 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 12 00:16:00.868762 kernel: Freeing SMP alternatives memory: 32K
Jul 12 00:16:00.868770 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:16:00.868778 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 00:16:00.868786 kernel: landlock: Up and running.
Jul 12 00:16:00.868796 kernel: SELinux: Initializing.
Jul 12 00:16:00.868804 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:00.868812 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:16:00.868820 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 12 00:16:00.868828 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 12 00:16:00.868836 kernel: ... version: 0
Jul 12 00:16:00.868844 kernel: ... bit width: 48
Jul 12 00:16:00.868852 kernel: ... generic registers: 6
Jul 12 00:16:00.868860 kernel: ... value mask: 0000ffffffffffff
Jul 12 00:16:00.868870 kernel: ... max period: 00007fffffffffff
Jul 12 00:16:00.868878 kernel: ... fixed-purpose events: 0
Jul 12 00:16:00.868886 kernel: ... event mask: 000000000000003f
Jul 12 00:16:00.868894 kernel: signal: max sigframe size: 1776
Jul 12 00:16:00.868902 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:16:00.868910 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:16:00.868921 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 00:16:00.868929 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:16:00.868938 kernel: smpboot: x86: Booting SMP configuration:
Jul 12 00:16:00.868948 kernel: .... node #0, CPUs: #1 #2 #3
Jul 12 00:16:00.868956 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:16:00.868964 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 12 00:16:00.868972 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137196K reserved, 0K cma-reserved)
Jul 12 00:16:00.868980 kernel: devtmpfs: initialized
Jul 12 00:16:00.868988 kernel: x86/mm: Memory block size: 128MB
Jul 12 00:16:00.868996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 12 00:16:00.869004 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 12 00:16:00.869012 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 12 00:16:00.869034 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 12 00:16:00.869053 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 12 00:16:00.869062 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 12 00:16:00.869079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:16:00.869089 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:16:00.869109 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:16:00.869120 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:16:00.869130 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:16:00.869140 kernel: audit: type=2000 audit(1752279358.123:1): state=initialized audit_enabled=0 res=1
Jul 12 00:16:00.869154 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:16:00.869164 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 12 00:16:00.869174 kernel: cpuidle: using governor menu
Jul 12 00:16:00.869184 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:16:00.869200 kernel: dca service started, version 1.12.1
Jul 12 00:16:00.869211 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 12 00:16:00.869221 kernel: PCI: Using configuration type 1 for base access
Jul 12 00:16:00.869231 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 12 00:16:00.869245 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:16:00.869255 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:16:00.869265 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:16:00.869276 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:16:00.869286 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:16:00.869296 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:16:00.869306 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:16:00.869316 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:16:00.869323 kernel: ACPI: Interpreter enabled
Jul 12 00:16:00.869331 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 12 00:16:00.869342 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 12 00:16:00.869350 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 12 00:16:00.869358 kernel: PCI: Using E820 reservations for host bridge windows
Jul 12 00:16:00.869366 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 12 00:16:00.869374 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:16:00.869627 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:16:00.869762 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 12 00:16:00.869889 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 12 00:16:00.869900 kernel: PCI host bridge to bus 0000:00
Jul 12 00:16:00.870051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 12 00:16:00.870223 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 12 00:16:00.870342 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 12 00:16:00.870452 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 12 00:16:00.870577 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 12 00:16:00.870695 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 12 00:16:00.870804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:16:00.870966 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 12 00:16:00.871118 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 12 00:16:00.871241 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 12 00:16:00.871362 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 12 00:16:00.871496 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 12 00:16:00.871633 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 12 00:16:00.871775 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 00:16:00.871899 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 12 00:16:00.872036 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 12 00:16:00.872164 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 12 00:16:00.872304 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 12 00:16:00.872432 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 12 00:16:00.872553 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 12 00:16:00.872697 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 12 00:16:00.872838 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 12 00:16:00.872961 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 12 00:16:00.873095 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 12 00:16:00.873217 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 12 00:16:00.873343 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 12 00:16:00.873495 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 12 00:16:00.873639 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 12 00:16:00.873779 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 12 00:16:00.873901 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 12 00:16:00.874032 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 12 00:16:00.874173 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 12 00:16:00.874302 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 12 00:16:00.874313 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 12 00:16:00.874322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 12 00:16:00.874330 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 12 00:16:00.874339 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 12 00:16:00.874347 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 12 00:16:00.874355 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 12 00:16:00.874366 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 12 00:16:00.874381 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 12 00:16:00.874390 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 12 00:16:00.874398 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 12 00:16:00.874406 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 12 00:16:00.874414 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 12 00:16:00.874422 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 12 00:16:00.874430 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 12 00:16:00.874438 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 12 00:16:00.874447 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 12 00:16:00.874457 kernel: iommu: Default domain type: Translated
Jul 12 00:16:00.874465 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 12 00:16:00.874474 kernel: efivars: Registered efivars operations
Jul 12 00:16:00.874482 kernel: PCI: Using ACPI for IRQ routing
Jul 12 00:16:00.874490 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 12 00:16:00.874498 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 12 00:16:00.874506 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 12 00:16:00.874514 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 12 00:16:00.874522 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 12 00:16:00.874532 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 12 00:16:00.874540 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 12 00:16:00.874548 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 12 00:16:00.874571 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 12 00:16:00.874721 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 12 00:16:00.874843 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 12 00:16:00.874961 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 12 00:16:00.874972 kernel: vgaarb: loaded
Jul 12 00:16:00.874985 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 12 00:16:00.874993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 12 00:16:00.875002 kernel: clocksource: Switched to clocksource kvm-clock
Jul 12 00:16:00.875010 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:16:00.875028 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:16:00.875037 kernel: pnp: PnP ACPI init
Jul 12 00:16:00.875198 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 12 00:16:00.875227 kernel: pnp: PnP ACPI: found 6 devices
Jul 12 00:16:00.875242 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 12 00:16:00.875253 kernel: NET: Registered PF_INET protocol family
Jul 12 00:16:00.875264 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:16:00.875274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:16:00.875285 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:16:00.875296 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:16:00.875307 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:16:00.875318 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:16:00.875329 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:00.875343 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:16:00.875354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:16:00.875365 kernel: NET: Registered PF_XDP protocol family
Jul 12 00:16:00.875499 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 12 00:16:00.875642 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 12 00:16:00.875756 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 12 00:16:00.875866 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 12 00:16:00.875976 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 12 00:16:00.876109 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 12 00:16:00.876219 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 12 00:16:00.876331 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 12 00:16:00.876343 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:16:00.876352 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 00:16:00.876360 kernel: Initialise system trusted keyrings
Jul 12 00:16:00.876369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:16:00.876378 kernel: Key type asymmetric registered
Jul 12 00:16:00.876390 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:16:00.876398 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:16:00.876407 kernel: io scheduler mq-deadline registered
Jul 12 00:16:00.876418 kernel: io scheduler kyber registered
Jul 12 00:16:00.876426 kernel: io scheduler bfq registered
Jul 12 00:16:00.876435 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 12 00:16:00.876446 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 12 00:16:00.876455 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 12 00:16:00.876463 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 12 00:16:00.876472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:16:00.876481 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 12 00:16:00.876490 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 12 00:16:00.876499 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 12 00:16:00.876507 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 12 00:16:00.876671 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 12 00:16:00.876690 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 12 00:16:00.876811 kernel: rtc_cmos 00:04: registered as rtc0
Jul 12 00:16:00.876933 kernel: rtc_cmos 00:04: setting system clock to 2025-07-12T00:16:00 UTC (1752279360)
Jul 12 00:16:00.877061 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 12 00:16:00.877072 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 12 00:16:00.877081 kernel: efifb: probing for efifb
Jul 12 00:16:00.877090 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 12 00:16:00.877098 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 12 00:16:00.877110 kernel: efifb: scrolling: redraw
Jul 12 00:16:00.877119 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 00:16:00.877128 kernel: Console: switching to colour frame buffer device 160x50
Jul 12 00:16:00.877136 kernel: fb0: EFI VGA frame buffer device
Jul 12 00:16:00.877145 kernel: pstore: Using crash dump compression: deflate
Jul 12 00:16:00.877154 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 12 00:16:00.877163 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:16:00.877171 kernel: Segment Routing with IPv6
Jul 12 00:16:00.877179 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:16:00.877190 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:16:00.877198 kernel: Key type dns_resolver registered
Jul 12 00:16:00.877207 kernel: IPI shorthand broadcast: enabled
Jul 12 00:16:00.877216 kernel: sched_clock: Marking stable (3629006681, 173462274)->(3894044014, -91575059)
Jul 12 00:16:00.877224 kernel: registered taskstats version 1
Jul 12 00:16:00.877233 kernel: Loading compiled-in X.509 certificates
Jul 12 00:16:00.877241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f8f9174ae27e6261b0ae25e5f0210210a376c8b8'
Jul 12 00:16:00.877250 kernel: Demotion targets for Node 0: null
Jul 12 00:16:00.877258 kernel: Key type .fscrypt registered Jul 12
00:16:00.877269 kernel: Key type fscrypt-provisioning registered Jul 12 00:16:00.877278 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 00:16:00.877287 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:16:00.877295 kernel: ima: No architecture policies found Jul 12 00:16:00.877304 kernel: clk: Disabling unused clocks Jul 12 00:16:00.877312 kernel: Warning: unable to open an initial console. Jul 12 00:16:00.877321 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 12 00:16:00.877330 kernel: Write protecting the kernel read-only data: 24576k Jul 12 00:16:00.877338 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 12 00:16:00.877349 kernel: Run /init as init process Jul 12 00:16:00.877358 kernel: with arguments: Jul 12 00:16:00.877366 kernel: /init Jul 12 00:16:00.877374 kernel: with environment: Jul 12 00:16:00.877382 kernel: HOME=/ Jul 12 00:16:00.877391 kernel: TERM=linux Jul 12 00:16:00.877399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:16:00.877408 systemd[1]: Successfully made /usr/ read-only. Jul 12 00:16:00.877423 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 00:16:00.877433 systemd[1]: Detected virtualization kvm. Jul 12 00:16:00.877442 systemd[1]: Detected architecture x86-64. Jul 12 00:16:00.877450 systemd[1]: Running in initrd. Jul 12 00:16:00.877459 systemd[1]: No hostname configured, using default hostname. Jul 12 00:16:00.877468 systemd[1]: Hostname set to . Jul 12 00:16:00.877477 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:16:00.877488 systemd[1]: Queued start job for default target initrd.target. 
Jul 12 00:16:00.877499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:16:00.877509 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:16:00.877518 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:16:00.877527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:16:00.877537 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:16:00.877547 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:16:00.877571 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:16:00.877584 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:16:00.877593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:16:00.877602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:16:00.877611 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:16:00.877620 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:16:00.877629 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:16:00.877638 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:16:00.877647 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:16:00.877658 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:16:00.877667 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:16:00.877676 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jul 12 00:16:00.877686 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:16:00.877695 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:16:00.877704 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:16:00.877713 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:16:00.877722 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:16:00.877731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:16:00.877742 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:16:00.877752 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 12 00:16:00.877761 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:16:00.877770 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:16:00.877779 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:16:00.877788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:00.877797 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:16:00.877809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:16:00.877818 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:16:00.877828 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:16:00.877861 systemd-journald[219]: Collecting audit messages is disabled. Jul 12 00:16:00.877887 systemd-journald[219]: Journal started Jul 12 00:16:00.877906 systemd-journald[219]: Runtime Journal (/run/log/journal/5c7f04b9d5064f9dbdbc54a8e215e96c) is 6M, max 48.5M, 42.4M free. 
Jul 12 00:16:00.867849 systemd-modules-load[221]: Inserted module 'overlay' Jul 12 00:16:00.883340 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:16:00.880995 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:16:00.881902 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:16:00.898862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:16:00.898726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:16:00.901134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:00.906764 kernel: Bridge firewalling registered Jul 12 00:16:00.903923 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 12 00:16:00.905035 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 12 00:16:00.905094 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:16:00.923150 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:16:00.927784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:16:00.930696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:16:00.934122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:16:00.941171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:16:00.943345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:16:00.954716 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 12 00:16:00.957339 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:16:00.988515 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=403b91c9a87828c895f7b7bfd580cc2c7aac71fa87076ee6fb7434b6c136b8f2 Jul 12 00:16:00.997013 systemd-resolved[255]: Positive Trust Anchors: Jul 12 00:16:00.997044 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:16:00.997080 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:16:00.999640 systemd-resolved[255]: Defaulting to hostname 'linux'. Jul 12 00:16:01.005941 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:16:01.024295 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:16:01.122596 kernel: SCSI subsystem initialized Jul 12 00:16:01.136597 kernel: Loading iSCSI transport class v2.0-870. 
Jul 12 00:16:01.151597 kernel: iscsi: registered transport (tcp) Jul 12 00:16:01.174679 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:16:01.174761 kernel: QLogic iSCSI HBA Driver Jul 12 00:16:01.198809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:16:01.226263 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:16:01.230551 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:16:01.303932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:16:01.306169 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:16:01.386606 kernel: raid6: avx2x4 gen() 26269 MB/s Jul 12 00:16:01.412615 kernel: raid6: avx2x2 gen() 26198 MB/s Jul 12 00:16:01.439622 kernel: raid6: avx2x1 gen() 22765 MB/s Jul 12 00:16:01.439707 kernel: raid6: using algorithm avx2x4 gen() 26269 MB/s Jul 12 00:16:01.465075 kernel: raid6: .... xor() 7578 MB/s, rmw enabled Jul 12 00:16:01.465131 kernel: raid6: using avx2x2 recovery algorithm Jul 12 00:16:01.486609 kernel: xor: automatically using best checksumming function avx Jul 12 00:16:01.668627 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:16:01.678407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:16:01.681261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:16:01.713726 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 12 00:16:01.719832 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:16:01.721922 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:16:01.757837 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jul 12 00:16:01.793838 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:16:01.796730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:16:01.878859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:16:01.882033 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:16:01.931629 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 12 00:16:01.936190 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:16:01.936460 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 00:16:01.945161 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:16:01.945221 kernel: GPT:9289727 != 19775487 Jul 12 00:16:01.945233 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:16:01.945244 kernel: GPT:9289727 != 19775487 Jul 12 00:16:01.945267 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:16:01.945277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:16:01.961597 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 12 00:16:01.973928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:16:01.984410 kernel: AES CTR mode by8 optimization enabled Jul 12 00:16:01.984439 kernel: libata version 3.00 loaded. Jul 12 00:16:01.974318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:01.985824 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:01.990885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:01.993139 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 12 00:16:02.001813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:16:02.001948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 00:16:02.008802 kernel: ahci 0000:00:1f.2: version 3.0 Jul 12 00:16:02.009092 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 12 00:16:02.010599 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 12 00:16:02.012392 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 12 00:16:02.012590 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 12 00:16:02.014280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:16:02.019590 kernel: scsi host0: ahci Jul 12 00:16:02.035612 kernel: scsi host1: ahci Jul 12 00:16:02.036597 kernel: scsi host2: ahci Jul 12 00:16:02.038658 kernel: scsi host3: ahci Jul 12 00:16:02.038835 kernel: scsi host4: ahci Jul 12 00:16:02.046607 kernel: scsi host5: ahci Jul 12 00:16:02.046857 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 12 00:16:02.046876 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 12 00:16:02.046886 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 12 00:16:02.046896 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 12 00:16:02.046906 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 12 00:16:02.046916 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 12 00:16:02.051450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 12 00:16:02.054261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:16:02.063686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:16:02.076506 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 12 00:16:02.077821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 00:16:02.086233 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 12 00:16:02.088231 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:16:02.148940 disk-uuid[634]: Primary Header is updated. Jul 12 00:16:02.148940 disk-uuid[634]: Secondary Entries is updated. Jul 12 00:16:02.148940 disk-uuid[634]: Secondary Header is updated. Jul 12 00:16:02.153600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:16:02.158582 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:16:02.358888 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 12 00:16:02.358987 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 12 00:16:02.359006 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 12 00:16:02.360601 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 12 00:16:02.360708 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 12 00:16:02.361593 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 12 00:16:02.362613 kernel: ata3.00: applying bridge limits Jul 12 00:16:02.362638 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 12 00:16:02.363606 kernel: ata3.00: configured for UDMA/100 Jul 12 00:16:02.364604 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 12 00:16:02.456684 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 12 00:16:02.457039 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:16:02.477598 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 12 00:16:02.888838 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:16:02.891883 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:16:02.894322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 12 00:16:02.896761 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:16:02.899889 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:16:02.939246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:16:03.159602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:16:03.160454 disk-uuid[635]: The operation has completed successfully. Jul 12 00:16:03.197805 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:16:03.198005 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:16:03.239170 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:16:03.270617 sh[664]: Success Jul 12 00:16:03.290450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:16:03.290511 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:16:03.290529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 00:16:03.301605 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 12 00:16:03.340184 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:16:03.344214 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:16:03.366520 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 12 00:16:03.375147 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 00:16:03.375219 kernel: BTRFS: device fsid bb55a55d-83fd-4659-93e1-1a7697cb01ff devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (676) Jul 12 00:16:03.377760 kernel: BTRFS info (device dm-0): first mount of filesystem bb55a55d-83fd-4659-93e1-1a7697cb01ff Jul 12 00:16:03.377789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 12 00:16:03.377804 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 00:16:03.383869 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:16:03.385543 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 00:16:03.387130 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:16:03.388185 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:16:03.390180 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:16:03.420632 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Jul 12 00:16:03.420700 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7 Jul 12 00:16:03.422778 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 00:16:03.422815 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 00:16:03.431687 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7 Jul 12 00:16:03.432343 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:16:03.436380 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 12 00:16:03.535545 ignition[750]: Ignition 2.21.0 Jul 12 00:16:03.535581 ignition[750]: Stage: fetch-offline Jul 12 00:16:03.535617 ignition[750]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:16:03.535626 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:16:03.535712 ignition[750]: parsed url from cmdline: "" Jul 12 00:16:03.535717 ignition[750]: no config URL provided Jul 12 00:16:03.535721 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:16:03.535730 ignition[750]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:16:03.535755 ignition[750]: op(1): [started] loading QEMU firmware config module Jul 12 00:16:03.535761 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:16:03.547114 ignition[750]: op(1): [finished] loading QEMU firmware config module Jul 12 00:16:03.561461 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:16:03.565669 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:16:03.598136 ignition[750]: parsing config with SHA512: cfde3b92de29fb5cc6417f4a114e2d01ab43efe1be57c868d1aafbc0ce5e47d71f819a7295cfe0167a62e21e4c9b9bce85bbda9e312b357859f4d6c7ccb0bcee Jul 12 00:16:03.604648 unknown[750]: fetched base config from "system" Jul 12 00:16:03.605150 ignition[750]: fetch-offline: fetch-offline passed Jul 12 00:16:03.604668 unknown[750]: fetched user config from "qemu" Jul 12 00:16:03.605240 ignition[750]: Ignition finished successfully Jul 12 00:16:03.609081 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:16:03.628785 systemd-networkd[855]: lo: Link UP Jul 12 00:16:03.628797 systemd-networkd[855]: lo: Gained carrier Jul 12 00:16:03.630750 systemd-networkd[855]: Enumeration completed Jul 12 00:16:03.631038 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 12 00:16:03.631284 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:03.631290 systemd-networkd[855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:16:03.632339 systemd[1]: Reached target network.target - Network. Jul 12 00:16:03.632849 systemd-networkd[855]: eth0: Link UP Jul 12 00:16:03.632854 systemd-networkd[855]: eth0: Gained carrier Jul 12 00:16:03.632865 systemd-networkd[855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:16:03.635018 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 00:16:03.638536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 00:16:03.677699 systemd-networkd[855]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:16:03.697474 ignition[859]: Ignition 2.21.0 Jul 12 00:16:03.697490 ignition[859]: Stage: kargs Jul 12 00:16:03.697666 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:16:03.697678 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:16:03.698898 ignition[859]: kargs: kargs passed Jul 12 00:16:03.698994 ignition[859]: Ignition finished successfully Jul 12 00:16:03.704213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:16:03.706633 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 12 00:16:03.741371 ignition[868]: Ignition 2.21.0 Jul 12 00:16:03.741387 ignition[868]: Stage: disks Jul 12 00:16:03.741541 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:16:03.741552 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:16:03.742286 ignition[868]: disks: disks passed Jul 12 00:16:03.742341 ignition[868]: Ignition finished successfully Jul 12 00:16:03.747701 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:16:03.750415 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:16:03.750581 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:16:03.752682 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:16:03.756111 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:16:03.758002 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:16:03.760273 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:16:03.793470 systemd-resolved[255]: Detected conflict on linux IN A 10.0.0.95 Jul 12 00:16:03.793486 systemd-resolved[255]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jul 12 00:16:03.794470 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 12 00:16:03.803432 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:16:03.804923 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:16:03.916608 kernel: EXT4-fs (vda9): mounted filesystem 0ad89691-b65b-416c-92a9-d1ab167398bb r/w with ordered data mode. Quota mode: none. Jul 12 00:16:03.917491 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:16:03.919104 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:16:03.921802 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 00:16:03.923676 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:16:03.925022 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 00:16:03.925075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:16:03.925106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:16:03.935916 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:16:03.937625 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 00:16:03.942446 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Jul 12 00:16:03.942485 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7 Jul 12 00:16:03.942497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 00:16:03.944582 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 00:16:03.949186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:16:03.980476 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:16:03.986252 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:16:03.992593 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:16:03.999322 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:16:04.111731 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:16:04.114256 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:16:04.116897 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 12 00:16:04.144632 kernel: BTRFS info (device vda6): last unmount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:16:04.158670 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:16:04.183661 ignition[1000]: INFO : Ignition 2.21.0
Jul 12 00:16:04.183661 ignition[1000]: INFO : Stage: mount
Jul 12 00:16:04.183661 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:04.183661 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:04.183661 ignition[1000]: INFO : mount: mount passed
Jul 12 00:16:04.183661 ignition[1000]: INFO : Ignition finished successfully
Jul 12 00:16:04.184055 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:16:04.187845 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:16:04.374193 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:16:04.376242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:16:04.414692 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Jul 12 00:16:04.414745 kernel: BTRFS info (device vda6): first mount of filesystem 09be57b1-ecdf-4447-b4fe-0c07e0aee6f7
Jul 12 00:16:04.414757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 00:16:04.416152 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 00:16:04.419814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:16:04.449700 ignition[1028]: INFO : Ignition 2.21.0
Jul 12 00:16:04.449700 ignition[1028]: INFO : Stage: files
Jul 12 00:16:04.451980 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:04.451980 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:04.451980 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:16:04.455934 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:16:04.455934 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:16:04.458766 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:16:04.458766 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:16:04.458766 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:16:04.458766 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 12 00:16:04.458766 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 12 00:16:04.457028 unknown[1028]: wrote ssh authorized keys file for user: core
Jul 12 00:16:04.504981 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:16:04.680639 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 12 00:16:04.680639 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:16:04.684756 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 12 00:16:04.831811 systemd-networkd[855]: eth0: Gained IPv6LL
Jul 12 00:16:05.159225 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:16:05.492313 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:16:05.492313 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:05.496304 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:16:05.496304 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:05.496304 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:16:05.496304 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:05.503361 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:16:05.505158 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:05.507071 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:16:05.513049 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:05.515772 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:16:05.515772 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:16:05.520972 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:16:05.520972 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:16:05.520972 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 12 00:16:05.767346 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:16:06.124649 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 12 00:16:06.124649 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:16:06.128320 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:06.134587 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:16:06.134587 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:16:06.134587 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:16:06.138956 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:06.140815 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:16:06.140815 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:16:06.140815 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:06.167037 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:06.172489 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:16:06.174292 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:16:06.174292 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:06.177080 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:16:06.177080 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:06.177080 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:16:06.177080 ignition[1028]: INFO : files: files passed
Jul 12 00:16:06.177080 ignition[1028]: INFO : Ignition finished successfully
Jul 12 00:16:06.183621 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:16:06.187512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:16:06.190057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:16:06.204818 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:16:06.205001 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:16:06.209209 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:16:06.213478 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:06.213478 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:06.216831 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:16:06.220280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:16:06.223056 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:16:06.224354 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:16:06.279512 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:16:06.279722 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:16:06.283292 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:16:06.283407 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:16:06.285390 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:16:06.288846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:16:06.318485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:06.321033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:16:06.355024 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:06.355291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:06.358723 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:16:06.359931 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:16:06.360076 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:16:06.365425 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:16:06.366534 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:16:06.368414 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:16:06.369405 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:16:06.369845 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:16:06.370172 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 00:16:06.370493 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:16:06.370992 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:16:06.371326 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:16:06.371661 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:16:06.372127 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:16:06.372428 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:16:06.372571 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:16:06.389262 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:06.390654 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:16:06.392877 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:16:06.394011 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:16:06.394518 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:16:06.394728 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:16:06.399272 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:16:06.399476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:16:06.400794 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:16:06.401182 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:16:06.407698 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:16:06.410439 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:16:06.410669 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:16:06.411156 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:16:06.411285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:16:06.414841 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:16:06.414978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:16:06.415930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:16:06.416113 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:16:06.418013 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:16:06.418160 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:16:06.422061 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:16:06.422491 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:16:06.422700 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:16:06.424218 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:16:06.427003 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:16:06.427207 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:16:06.427924 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:16:06.428029 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:16:06.433963 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:16:06.449829 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:16:06.475997 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:16:06.477125 ignition[1083]: INFO : Ignition 2.21.0
Jul 12 00:16:06.477125 ignition[1083]: INFO : Stage: umount
Jul 12 00:16:06.477125 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:16:06.477125 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:16:06.483038 ignition[1083]: INFO : umount: umount passed
Jul 12 00:16:06.483038 ignition[1083]: INFO : Ignition finished successfully
Jul 12 00:16:06.487199 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:16:06.487386 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:16:06.488061 systemd[1]: Stopped target network.target - Network.
Jul 12 00:16:06.491118 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:16:06.491209 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:16:06.492150 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:16:06.492218 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:16:06.494193 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:16:06.494272 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:16:06.497261 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:16:06.497356 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:16:06.499941 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:16:06.501232 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:16:06.507505 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:16:06.507671 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:16:06.515417 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 00:16:06.516233 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:16:06.516350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:16:06.522132 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 00:16:06.527185 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:16:06.527360 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:16:06.532190 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 00:16:06.532446 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 12 00:16:06.535183 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:16:06.535235 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:16:06.540147 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:16:06.541184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:16:06.541252 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:16:06.541930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:16:06.542004 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:16:06.547496 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:16:06.547552 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:16:06.548630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:16:06.550105 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 00:16:06.567545 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:16:06.577990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:16:06.635725 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:16:06.635915 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:16:06.639409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:16:06.639523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:16:06.639875 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:16:06.639922 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:16:06.640231 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:16:06.640301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:16:06.641246 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:16:06.641300 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:16:06.649596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:16:06.649668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:16:06.698540 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:16:06.699928 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 00:16:06.700043 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:16:06.704381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:16:06.704458 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:16:06.710433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:16:06.710547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:06.724537 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:16:06.724729 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:16:06.846443 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:16:06.846670 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:16:06.849067 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:16:06.852290 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:16:06.852440 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:16:06.856196 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:16:06.886398 systemd[1]: Switching root.
Jul 12 00:16:06.921215 systemd-journald[219]: Journal stopped
Jul 12 00:16:08.829415 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:16:08.829497 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:16:08.829514 kernel: SELinux: policy capability open_perms=1
Jul 12 00:16:08.829525 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:16:08.829542 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:16:08.829553 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:16:08.829581 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:16:08.829593 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:16:08.829613 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:16:08.829625 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 00:16:08.829636 kernel: audit: type=1403 audit(1752279367.782:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:16:08.829649 systemd[1]: Successfully loaded SELinux policy in 52.316ms.
Jul 12 00:16:08.829674 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.062ms.
Jul 12 00:16:08.829688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 00:16:08.829701 systemd[1]: Detected virtualization kvm.
Jul 12 00:16:08.829713 systemd[1]: Detected architecture x86-64.
Jul 12 00:16:08.829730 systemd[1]: Detected first boot.
Jul 12 00:16:08.829743 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:16:08.829755 zram_generator::config[1129]: No configuration found.
Jul 12 00:16:08.829784 kernel: Guest personality initialized and is inactive
Jul 12 00:16:08.829807 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 12 00:16:08.829822 kernel: Initialized host personality
Jul 12 00:16:08.829836 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 00:16:08.829851 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:16:08.829868 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 00:16:08.829888 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:16:08.829901 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:16:08.829913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:16:08.829936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:16:08.829948 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:16:08.829960 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:16:08.829972 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:16:08.829985 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:16:08.830002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:16:08.830015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:16:08.830027 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:16:08.830039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:16:08.830052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:16:08.830064 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:16:08.830076 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:16:08.830089 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:16:08.830106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:16:08.830119 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 00:16:08.830131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:16:08.830144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:16:08.830158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:16:08.830170 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:16:08.830182 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:16:08.830202 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:16:08.830226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:16:08.830250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:16:08.830263 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:16:08.830276 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:16:08.830288 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:16:08.830300 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:16:08.830313 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 00:16:08.830327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:16:08.830339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:16:08.830352 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:16:08.830370 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:16:08.830382 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:16:08.830394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:16:08.830406 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:16:08.830419 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:08.830431 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:16:08.830443 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:16:08.830455 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:16:08.830468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:16:08.830486 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:16:08.830499 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:16:08.830511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:16:08.830524 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:16:08.830536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:16:08.830548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:16:08.830628 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:16:08.830643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:16:08.830662 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:16:08.830675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:16:08.830687 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:16:08.830700 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:16:08.830712 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:16:08.830725 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:16:08.830737 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:16:08.830750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:16:08.830767 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:16:08.830780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:16:08.830792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:16:08.830816 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:16:08.830831 kernel: fuse: init (API version 7.41)
Jul 12 00:16:08.830846 kernel: loop: module loaded
Jul 12 00:16:08.830861 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 00:16:08.830873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:16:08.830892 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:16:08.830905 systemd[1]: Stopped verity-setup.service.
Jul 12 00:16:08.830932 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:08.830946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:16:08.830958 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:16:08.830970 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:16:08.830989 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:16:08.831001 kernel: ACPI: bus type drm_connector registered
Jul 12 00:16:08.831013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:16:08.831025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:16:08.831038 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:16:08.831050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:16:08.831091 systemd-journald[1198]: Collecting audit messages is disabled.
Jul 12 00:16:08.831115 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:16:08.831128 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:16:08.831146 systemd-journald[1198]: Journal started
Jul 12 00:16:08.831169 systemd-journald[1198]: Runtime Journal (/run/log/journal/5c7f04b9d5064f9dbdbc54a8e215e96c) is 6M, max 48.5M, 42.4M free.
Jul 12 00:16:08.413622 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:16:08.427022 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 00:16:08.427612 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:16:08.834985 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:16:08.836361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:16:08.836681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:16:08.838491 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:16:08.838822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:16:08.840605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:16:08.840912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:16:08.842877 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:16:08.843176 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:16:08.845035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:16:08.845335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:16:08.847409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:16:08.849399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:16:08.851607 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:16:08.853650 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 00:16:08.874863 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:16:08.878489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:16:08.884093 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:16:08.885484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:16:08.885533 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:16:08.888476 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 00:16:08.894485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:16:08.896930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:16:08.900842 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:16:08.904137 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:16:08.905581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:16:08.906948 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:16:08.908388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:16:08.910260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:16:08.914198 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:16:08.929764 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:16:08.935955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:16:08.938528 systemd-journald[1198]: Time spent on flushing to /var/log/journal/5c7f04b9d5064f9dbdbc54a8e215e96c is 20.587ms for 1069 entries.
Jul 12 00:16:08.938528 systemd-journald[1198]: System Journal (/var/log/journal/5c7f04b9d5064f9dbdbc54a8e215e96c) is 8M, max 195.6M, 187.6M free.
Jul 12 00:16:09.088701 systemd-journald[1198]: Received client request to flush runtime journal.
Jul 12 00:16:09.088779 kernel: loop0: detected capacity change from 0 to 113872
Jul 12 00:16:09.088819 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:16:09.088844 kernel: loop1: detected capacity change from 0 to 229808
Jul 12 00:16:08.938405 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:16:08.942705 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:16:08.973503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:16:08.986199 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:16:08.988424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:16:08.993286 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 00:16:08.996124 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:16:09.000713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:16:09.093389 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:16:09.114105 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 00:16:09.128734 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jul 12 00:16:09.129275 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Jul 12 00:16:09.136811 kernel: loop2: detected capacity change from 0 to 146240
Jul 12 00:16:09.137372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:16:09.182631 kernel: loop3: detected capacity change from 0 to 113872
Jul 12 00:16:09.192600 kernel: loop4: detected capacity change from 0 to 229808
Jul 12 00:16:09.208591 kernel: loop5: detected capacity change from 0 to 146240
Jul 12 00:16:09.242473 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 00:16:09.243100 (sd-merge)[1273]: Merged extensions into '/usr'.
Jul 12 00:16:09.248321 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:16:09.248337 systemd[1]: Reloading...
Jul 12 00:16:09.306595 zram_generator::config[1295]: No configuration found.
Jul 12 00:16:09.608552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:16:09.647433 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:16:09.698258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:16:09.698852 systemd[1]: Reloading finished in 449 ms.
Jul 12 00:16:09.748505 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:16:09.778305 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:16:09.780675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:16:09.789638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:16:09.796908 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:16:09.796928 systemd[1]: Reloading...
Jul 12 00:16:09.816993 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 12 00:16:09.817467 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 12 00:16:09.817990 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:16:09.818511 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:16:09.819983 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:16:09.820505 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 12 00:16:09.820741 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Jul 12 00:16:09.826208 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:16:09.826314 systemd-tmpfiles[1336]: Skipping /boot
Jul 12 00:16:09.858910 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:16:09.859107 systemd-tmpfiles[1336]: Skipping /boot
Jul 12 00:16:09.878645 zram_generator::config[1364]: No configuration found.
Jul 12 00:16:10.003352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:16:10.096611 systemd[1]: Reloading finished in 299 ms.
Jul 12 00:16:10.141721 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:16:10.152086 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 00:16:10.155150 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:16:10.168000 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:16:10.173680 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:16:10.177365 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:16:10.181540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:10.181741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:16:10.183775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:16:10.186643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:16:10.194861 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:16:10.198027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:16:10.198162 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:16:10.198264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:10.199725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:16:10.203300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:16:10.203631 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:16:10.206700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:16:10.207326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:16:10.209875 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:16:10.212241 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:16:10.212512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:16:10.230234 augenrules[1434]: No rules
Jul 12 00:16:10.230249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:16:10.232484 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:16:10.232799 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 00:16:10.238744 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:10.239112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:16:10.240515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:16:10.243120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:16:10.245608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:16:10.251823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:16:10.253069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:16:10.253121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 00:16:10.254593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:16:10.256891 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:16:10.259666 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:16:10.261539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 00:16:10.263209 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:16:10.264862 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:16:10.266719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:16:10.267034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:16:10.269299 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:16:10.269619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:16:10.271745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:16:10.272083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:16:10.274020 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:16:10.274309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:16:10.276133 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:16:10.285996 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:16:10.286094 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:16:10.289780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 00:16:10.291023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:16:10.296855 systemd-udevd[1445]: Using default interface naming scheme 'v255'.
Jul 12 00:16:10.319182 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:16:10.324196 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:16:10.326367 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:16:10.396650 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 12 00:16:10.486701 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 00:16:10.497600 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 12 00:16:10.504480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:16:10.507649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:16:10.515593 kernel: ACPI: button: Power Button [PWRF]
Jul 12 00:16:10.540014 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:16:10.551141 systemd-networkd[1461]: lo: Link UP
Jul 12 00:16:10.551729 systemd-networkd[1461]: lo: Gained carrier
Jul 12 00:16:10.554594 systemd-networkd[1461]: Enumeration completed
Jul 12 00:16:10.555200 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:10.555277 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:16:10.558882 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:16:10.559848 systemd-networkd[1461]: eth0: Link UP
Jul 12 00:16:10.560262 systemd-networkd[1461]: eth0: Gained carrier
Jul 12 00:16:10.560343 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:16:10.561108 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 12 00:16:10.565831 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:16:10.573981 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 12 00:16:10.574436 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 12 00:16:10.574799 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 12 00:16:10.575629 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:16:10.583994 systemd-resolved[1406]: Positive Trust Anchors:
Jul 12 00:16:10.584005 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:16:10.584037 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:16:10.590830 systemd-resolved[1406]: Defaulting to hostname 'linux'.
Jul 12 00:16:10.596038 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:16:10.597518 systemd[1]: Reached target network.target - Network.
Jul 12 00:16:10.598551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:16:10.603792 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 12 00:16:10.611432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 00:16:11.781547 systemd-resolved[1406]: Clock change detected. Flushing caches.
Jul 12 00:16:11.781643 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 00:16:11.781690 systemd-timesyncd[1455]: Initial clock synchronization to Sat 2025-07-12 00:16:11.781481 UTC.
Jul 12 00:16:11.781881 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:16:11.783550 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 00:16:11.784966 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 00:16:11.786224 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 12 00:16:11.787418 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 00:16:11.788671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:16:11.788697 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:16:11.789659 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 00:16:11.790899 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 00:16:11.792105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 00:16:11.793407 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:16:11.795780 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 00:16:11.799075 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 00:16:11.805582 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 12 00:16:11.807109 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 12 00:16:11.808657 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 12 00:16:11.861417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 00:16:11.864123 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 12 00:16:11.866371 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 00:16:11.868681 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:16:11.869807 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:16:11.871429 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:16:11.871469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:16:11.919137 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:16:11.923123 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:16:11.928822 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:16:11.938612 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:16:11.942274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:16:11.943383 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:16:11.947438 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 12 00:16:11.951515 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:16:11.953830 jq[1526]: false
Jul 12 00:16:11.956426 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:16:11.960840 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 00:16:11.967577 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 00:16:11.976492 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 00:16:11.979631 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:16:11.984692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 00:16:11.985283 extend-filesystems[1527]: Found /dev/vda6
Jul 12 00:16:11.986613 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 00:16:11.989204 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 00:16:11.998237 extend-filesystems[1527]: Found /dev/vda9
Jul 12 00:16:12.004569 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:16:12.008097 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:16:12.008492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 00:16:12.011016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:16:12.011400 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 00:16:12.015641 extend-filesystems[1527]: Checking size of /dev/vda9
Jul 12 00:16:12.027107 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Jul 12 00:16:12.027120 oslogin_cache_refresh[1528]: Refreshing passwd entry cache
Jul 12 00:16:12.029528 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:16:12.032697 kernel: kvm_amd: TSC scaling supported
Jul 12 00:16:12.032812 kernel: kvm_amd: Nested Virtualization enabled
Jul 12 00:16:12.032847 kernel: kvm_amd: Nested Paging enabled
Jul 12 00:16:12.032876 kernel: kvm_amd: LBR virtualization supported
Jul 12 00:16:12.037544 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 12 00:16:12.037599 kernel: kvm_amd: Virtual GIF supported
Jul 12 00:16:12.037939 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 00:16:12.045285 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting users, quitting
Jul 12 00:16:12.045285 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 00:16:12.045285 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Refreshing group entry cache
Jul 12 00:16:12.044111 oslogin_cache_refresh[1528]: Failure getting users, quitting
Jul 12 00:16:12.044142 oslogin_cache_refresh[1528]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 00:16:12.044250 oslogin_cache_refresh[1528]: Refreshing group entry cache
Jul 12 00:16:12.052284 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Failure getting groups, quitting
Jul 12 00:16:12.052284 google_oslogin_nss_cache[1528]: oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 00:16:12.049986 oslogin_cache_refresh[1528]: Failure getting groups, quitting
Jul 12 00:16:12.050003 oslogin_cache_refresh[1528]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 00:16:12.054311 jq[1545]: true
Jul 12 00:16:12.054856 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 00:16:12.056013 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 12 00:16:12.059066 extend-filesystems[1527]: Resized partition /dev/vda9
Jul 12 00:16:12.056337 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 12 00:16:12.074252 update_engine[1542]: I20250712 00:16:12.065887 1542 main.cc:92] Flatcar Update Engine starting
Jul 12 00:16:12.074551 extend-filesystems[1571]: resize2fs 1.47.2 (1-Jan-2025)
Jul 12 00:16:12.081086 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:16:12.087325 jq[1569]: true
Jul 12 00:16:12.128014 tar[1548]: linux-amd64/LICENSE
Jul 12 00:16:12.128014 tar[1548]: linux-amd64/helm
Jul 12 00:16:12.130648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:16:12.205258 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:16:12.315547 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:16:12.219929 dbus-daemon[1524]: [system] SELinux support is enabled
Jul 12 00:16:12.316091 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:16:12.316091 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:16:12.316091 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:16:12.320309 update_engine[1542]: I20250712 00:16:12.250077 1542 update_check_scheduler.cc:74] Next update check in 4m51s
Jul 12 00:16:12.220434 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 00:16:12.224656 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:16:12.224684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 00:16:12.226205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:16:12.226256 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 00:16:12.249752 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 00:16:12.255780 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 00:16:12.323697 extend-filesystems[1527]: Resized filesystem in /dev/vda9
Jul 12 00:16:12.350335 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:16:12.350800 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:16:12.356892 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:16:12.446074 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:16:12.446632 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:16:12.446928 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:16:12.457235 bash[1598]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:16:12.460560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 00:16:12.489227 kernel: EDAC MC: Ver: 3.0.0
Jul 12 00:16:12.492510 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 00:16:12.493920 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:16:12.501741 systemd-logind[1538]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 12 00:16:12.502061 systemd-logind[1538]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 12 00:16:12.502514 systemd-logind[1538]: New seat seat0.
Jul 12 00:16:12.505720 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 00:16:12.530265 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:16:12.532879 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:16:12.537096 locksmithd[1588]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:16:12.541591 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 12 00:16:12.543194 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:16:12.545257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:16:12.711253 containerd[1552]: time="2025-07-12T00:16:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 12 00:16:12.712139 containerd[1552]: time="2025-07-12T00:16:12.712076342Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 12 00:16:12.722644 containerd[1552]: time="2025-07-12T00:16:12.722593433Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.987µs"
Jul 12 00:16:12.722644 containerd[1552]: time="2025-07-12T00:16:12.722624842Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 12 00:16:12.722644 containerd[1552]: time="2025-07-12T00:16:12.722646262Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 12 00:16:12.722876 containerd[1552]: time="2025-07-12T00:16:12.722841198Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 12 00:16:12.722876 containerd[1552]: time="2025-07-12T00:16:12.722867257Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 12 00:16:12.722919 containerd[1552]: time="2025-07-12T00:16:12.722895740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723000 containerd[1552]: time="2025-07-12T00:16:12.722974979Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723000 containerd[1552]: time="2025-07-12T00:16:12.722993664Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723651 containerd[1552]: time="2025-07-12T00:16:12.723608507Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723651 containerd[1552]: time="2025-07-12T00:16:12.723634226Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723699 containerd[1552]: time="2025-07-12T00:16:12.723649494Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723699 containerd[1552]: time="2025-07-12T00:16:12.723664943Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 12 00:16:12.723825 containerd[1552]: time="2025-07-12T00:16:12.723794085Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 12 00:16:12.724138 containerd[1552]: time="2025-07-12T00:16:12.724105790Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 12 00:16:12.724199 containerd[1552]: time="2025-07-12T00:16:12.724176182Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 12 00:16:12.724248 containerd[1552]: time="2025-07-12T00:16:12.724196250Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 12 00:16:12.724270 containerd[1552]: time="2025-07-12T00:16:12.724257064Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 12 00:16:12.724567 containerd[1552]: time="2025-07-12T00:16:12.724530888Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 12 00:16:12.724651 containerd[1552]: time="2025-07-12T00:16:12.724620806Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:16:12.745306 containerd[1552]: time="2025-07-12T00:16:12.745206607Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 12 00:16:12.745306 containerd[1552]: time="2025-07-12T00:16:12.745330209Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745354074Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745386905Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745407303Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745423033Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745439434Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745461285Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745479459Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745493235Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745507331Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 12 00:16:12.745646 containerd[1552]: time="2025-07-12T00:16:12.745524724Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745743174Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745777358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745798467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745813205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745850174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745869430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 12 00:16:12.745898 containerd[1552]: time="2025-07-12T00:16:12.745885551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.745904186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.745919354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.745934402Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.745949270Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.746058916Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 12 00:16:12.746088 containerd[1552]: time="2025-07-12T00:16:12.746085446Z" level=info msg="Start snapshots syncer"
Jul 12 00:16:12.746272 containerd[1552]: time="2025-07-12T00:16:12.746118808Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 12 00:16:12.746589 containerd[1552]: time="2025-07-12T00:16:12.746538335Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 12 00:16:12.746744 containerd[1552]: time="2025-07-12T00:16:12.746611012Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 12 00:16:12.746744 containerd[1552]: time="2025-07-12T00:16:12.746720988Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 12 00:16:12.746881 containerd[1552]: time="2025-07-12T00:16:12.746854749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 12 00:16:12.746937 containerd[1552]: time="2025-07-12T00:16:12.746886619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 12 00:16:12.746937 containerd[1552]: time="2025-07-12T00:16:12.746904292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 12 00:16:12.746937 containerd[1552]: time="2025-07-12T00:16:12.746918589Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 12 00:16:12.746937 containerd[1552]: time="2025-07-12T00:16:12.746935561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 12 00:16:12.747047 containerd[1552]: time="2025-07-12T00:16:12.746953023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 12 00:16:12.747047 containerd[1552]: time="2025-07-12T00:16:12.746974323Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 12 00:16:12.747047 containerd[1552]: time="2025-07-12T00:16:12.747004670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 12 00:16:12.747047 containerd[1552]: time="2025-07-12T00:16:12.747018777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 12 00:16:12.747047 containerd[1552]: time="2025-07-12T00:16:12.747031631Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747087205Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747108735Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747122060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747134995Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747146366Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747159040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 12 00:16:12.747178 containerd[1552]: time="2025-07-12T00:16:12.747178346Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 12 00:16:12.747414 containerd[1552]: time="2025-07-12T00:16:12.747229803Z" level=info msg="runtime interface created"
Jul 12 00:16:12.747414 containerd[1552]: time="2025-07-12T00:16:12.747239831Z" level=info msg="created NRI interface"
Jul 12 00:16:12.747414 containerd[1552]: time="2025-07-12T00:16:12.747252495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 12 00:16:12.747414 containerd[1552]: time="2025-07-12T00:16:12.747268665Z" level=info msg="Connect containerd service"
Jul 12 00:16:12.747414 containerd[1552]: time="2025-07-12T00:16:12.747302359Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:16:12.748526 containerd[1552]: time="2025-07-12T00:16:12.748473997Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:16:12.858702 containerd[1552]: time="2025-07-12T00:16:12.858615680Z" level=info msg="Start subscribing containerd event"
Jul 12 00:16:12.858837 containerd[1552]: time="2025-07-12T00:16:12.858710639Z" level=info msg="Start recovering state"
Jul 12 00:16:12.858865 containerd[1552]: time="2025-07-12T00:16:12.858848918Z" level=info msg="Start event monitor"
Jul 12 00:16:12.858891 containerd[1552]: time="2025-07-12T00:16:12.858871791Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:16:12.858891 containerd[1552]: time="2025-07-12T00:16:12.858882681Z" level=info msg="Start streaming server"
Jul 12 00:16:12.859088 containerd[1552]: time="2025-07-12T00:16:12.858851613Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:16:12.859127 containerd[1552]: time="2025-07-12T00:16:12.858894574Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 12 00:16:12.859127 containerd[1552]: time="2025-07-12T00:16:12.859114907Z" level=info msg="runtime interface starting up..."
Jul 12 00:16:12.859127 containerd[1552]: time="2025-07-12T00:16:12.859124024Z" level=info msg="starting plugins..."
Jul 12 00:16:12.859246 containerd[1552]: time="2025-07-12T00:16:12.859142639Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:16:12.859246 containerd[1552]: time="2025-07-12T00:16:12.859149412Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 12 00:16:12.859468 containerd[1552]: time="2025-07-12T00:16:12.859443704Z" level=info msg="containerd successfully booted in 0.148993s"
Jul 12 00:16:12.859665 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:16:12.884957 tar[1548]: linux-amd64/README.md
Jul 12 00:16:12.930206 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:16:12.977033 systemd-networkd[1461]: eth0: Gained IPv6LL
Jul 12 00:16:12.981231 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:16:12.983557 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:16:12.986998 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 12 00:16:12.990408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:16:12.993205 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:16:13.021630 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 12 00:16:13.021998 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 12 00:16:13.024015 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:16:13.026731 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 00:16:14.313065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:16:14.327086 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:16:14.327507 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:16:14.328623 systemd[1]: Startup finished in 3.690s (kernel) + 7.148s (initrd) + 5.427s (userspace) = 16.266s.
Jul 12 00:16:15.043810 kubelet[1671]: E0712 00:16:15.043718 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:16:15.050542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:16:15.050819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:16:15.051508 systemd[1]: kubelet.service: Consumed 1.811s CPU time, 268M memory peak.
Jul 12 00:16:15.884747 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:16:15.886463 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:55064.service - OpenSSH per-connection server daemon (10.0.0.1:55064).
Jul 12 00:16:15.964339 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 55064 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:15.966631 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:15.974457 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 00:16:15.975874 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 00:16:15.983296 systemd-logind[1538]: New session 1 of user core.
Jul 12 00:16:16.006939 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 12 00:16:16.010607 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 00:16:16.034372 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:16:16.037488 systemd-logind[1538]: New session c1 of user core.
Jul 12 00:16:16.218683 systemd[1688]: Queued start job for default target default.target.
Jul 12 00:16:16.230829 systemd[1688]: Created slice app.slice - User Application Slice.
Jul 12 00:16:16.230861 systemd[1688]: Reached target paths.target - Paths.
Jul 12 00:16:16.230916 systemd[1688]: Reached target timers.target - Timers.
Jul 12 00:16:16.232783 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 00:16:16.245764 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 00:16:16.245933 systemd[1688]: Reached target sockets.target - Sockets.
Jul 12 00:16:16.245994 systemd[1688]: Reached target basic.target - Basic System.
Jul 12 00:16:16.246053 systemd[1688]: Reached target default.target - Main User Target.
Jul 12 00:16:16.246096 systemd[1688]: Startup finished in 200ms.
Jul 12 00:16:16.246607 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 00:16:16.248727 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 00:16:16.317394 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:46952.service - OpenSSH per-connection server daemon (10.0.0.1:46952).
Jul 12 00:16:16.366883 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 46952 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:16.368617 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:16.374338 systemd-logind[1538]: New session 2 of user core.
Jul 12 00:16:16.393636 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 00:16:16.450434 sshd[1701]: Connection closed by 10.0.0.1 port 46952
Jul 12 00:16:16.450843 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:16.464817 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:46952.service: Deactivated successfully.
Jul 12 00:16:16.467362 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:16:16.468198 systemd-logind[1538]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:16:16.471279 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954).
Jul 12 00:16:16.472386 systemd-logind[1538]: Removed session 2.
Jul 12 00:16:16.526764 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:16.528752 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:16.534198 systemd-logind[1538]: New session 3 of user core.
Jul 12 00:16:16.548652 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 00:16:16.601886 sshd[1709]: Connection closed by 10.0.0.1 port 46954
Jul 12 00:16:16.602429 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:16.618715 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:46954.service: Deactivated successfully.
Jul 12 00:16:16.621138 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:16:16.622194 systemd-logind[1538]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:16:16.626644 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:46964.service - OpenSSH per-connection server daemon (10.0.0.1:46964).
Jul 12 00:16:16.627660 systemd-logind[1538]: Removed session 3.
Jul 12 00:16:16.685362 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 46964 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:16.687025 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:16.692893 systemd-logind[1538]: New session 4 of user core.
Jul 12 00:16:16.710465 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 00:16:16.767716 sshd[1717]: Connection closed by 10.0.0.1 port 46964
Jul 12 00:16:16.768126 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:16.778733 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:46964.service: Deactivated successfully.
Jul 12 00:16:16.780967 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:16:16.782046 systemd-logind[1538]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:16:16.786944 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:46966.service - OpenSSH per-connection server daemon (10.0.0.1:46966).
Jul 12 00:16:16.787800 systemd-logind[1538]: Removed session 4.
Jul 12 00:16:16.845345 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 46966 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:16.847566 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:16.852727 systemd-logind[1538]: New session 5 of user core.
Jul 12 00:16:16.872524 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:16:16.937137 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:16:16.937495 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:16.959473 sudo[1726]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:16.961353 sshd[1725]: Connection closed by 10.0.0.1 port 46966
Jul 12 00:16:16.961787 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:16.980760 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:46966.service: Deactivated successfully.
Jul 12 00:16:16.983054 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:16:16.984006 systemd-logind[1538]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:16:16.987586 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:46976.service - OpenSSH per-connection server daemon (10.0.0.1:46976).
Jul 12 00:16:16.988388 systemd-logind[1538]: Removed session 5.
Jul 12 00:16:17.042392 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 46976 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:17.044412 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:17.050429 systemd-logind[1538]: New session 6 of user core.
Jul 12 00:16:17.059401 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:16:17.114846 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:16:17.115170 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:17.125878 sudo[1736]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:17.135274 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 12 00:16:17.135695 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:17.148820 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 00:16:17.210056 augenrules[1758]: No rules
Jul 12 00:16:17.212864 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:16:17.213254 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 00:16:17.215760 sudo[1735]: pam_unix(sudo:session): session closed for user root
Jul 12 00:16:17.217956 sshd[1734]: Connection closed by 10.0.0.1 port 46976
Jul 12 00:16:17.218407 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:17.227797 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:46976.service: Deactivated successfully.
Jul 12 00:16:17.230183 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:16:17.231386 systemd-logind[1538]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:16:17.234722 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:46986.service - OpenSSH per-connection server daemon (10.0.0.1:46986).
Jul 12 00:16:17.235401 systemd-logind[1538]: Removed session 6.
Jul 12 00:16:17.297385 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 46986 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:16:17.299376 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:17.304700 systemd-logind[1538]: New session 7 of user core.
Jul 12 00:16:17.314550 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:16:17.370456 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:16:17.370822 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:16:18.090488 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:16:18.138872 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:16:18.717568 dockerd[1791]: time="2025-07-12T00:16:18.717485128Z" level=info msg="Starting up"
Jul 12 00:16:18.719082 dockerd[1791]: time="2025-07-12T00:16:18.719045245Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 12 00:16:21.125822 dockerd[1791]: time="2025-07-12T00:16:21.125694881Z" level=info msg="Loading containers: start."
Jul 12 00:16:21.466260 kernel: Initializing XFRM netlink socket
Jul 12 00:16:22.021338 systemd-networkd[1461]: docker0: Link UP
Jul 12 00:16:22.405642 dockerd[1791]: time="2025-07-12T00:16:22.405458509Z" level=info msg="Loading containers: done."
Jul 12 00:16:22.426719 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2054828300-merged.mount: Deactivated successfully.
Jul 12 00:16:22.518330 dockerd[1791]: time="2025-07-12T00:16:22.518256216Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:16:22.518515 dockerd[1791]: time="2025-07-12T00:16:22.518425673Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 12 00:16:22.518658 dockerd[1791]: time="2025-07-12T00:16:22.518619156Z" level=info msg="Initializing buildkit"
Jul 12 00:16:22.644848 dockerd[1791]: time="2025-07-12T00:16:22.644764905Z" level=info msg="Completed buildkit initialization"
Jul 12 00:16:22.651089 dockerd[1791]: time="2025-07-12T00:16:22.650996566Z" level=info msg="Daemon has completed initialization"
Jul 12 00:16:22.651300 dockerd[1791]: time="2025-07-12T00:16:22.651132210Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:16:22.651544 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:16:24.279748 containerd[1552]: time="2025-07-12T00:16:24.279653012Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 12 00:16:25.093565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:16:25.095738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:16:25.488484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:16:25.492405 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:16:25.874831 kubelet[2011]: E0712 00:16:25.874759 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:16:25.882190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:16:25.882423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:16:25.882876 systemd[1]: kubelet.service: Consumed 280ms CPU time, 109M memory peak.
Jul 12 00:16:26.040098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273429442.mount: Deactivated successfully.
Jul 12 00:16:27.868162 containerd[1552]: time="2025-07-12T00:16:27.867986662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:27.928087 containerd[1552]: time="2025-07-12T00:16:27.927966368Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 12 00:16:27.979161 containerd[1552]: time="2025-07-12T00:16:27.979064071Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:28.113733 containerd[1552]: time="2025-07-12T00:16:28.113640967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:28.114712 containerd[1552]: time="2025-07-12T00:16:28.114660419Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.834916647s"
Jul 12 00:16:28.114799 containerd[1552]: time="2025-07-12T00:16:28.114716875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 12 00:16:28.115509 containerd[1552]: time="2025-07-12T00:16:28.115476861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 12 00:16:31.032170 containerd[1552]: time="2025-07-12T00:16:31.032071641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:31.033415 containerd[1552]: time="2025-07-12T00:16:31.033376329Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 12 00:16:31.035571 containerd[1552]: time="2025-07-12T00:16:31.035509831Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:31.040241 containerd[1552]: time="2025-07-12T00:16:31.038878882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:31.042185 containerd[1552]: time="2025-07-12T00:16:31.042129891Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.926612043s"
Jul 12 00:16:31.042185 containerd[1552]: time="2025-07-12T00:16:31.042170317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 12 00:16:31.042700 containerd[1552]: time="2025-07-12T00:16:31.042659214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 12 00:16:32.738164 containerd[1552]: time="2025-07-12T00:16:32.738077370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:32.738996 containerd[1552]: time="2025-07-12T00:16:32.738938766Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 12 00:16:32.740283 containerd[1552]: time="2025-07-12T00:16:32.740247731Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:32.743549 containerd[1552]: time="2025-07-12T00:16:32.743501966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:32.744502 containerd[1552]: time="2025-07-12T00:16:32.744456988Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.70176357s"
Jul 12 00:16:32.744502 containerd[1552]: time="2025-07-12T00:16:32.744490641Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 12 00:16:32.745277 containerd[1552]: time="2025-07-12T00:16:32.745231762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 12 00:16:34.501167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904554627.mount: Deactivated successfully.
Jul 12 00:16:35.410027 containerd[1552]: time="2025-07-12T00:16:35.409959271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:35.410697 containerd[1552]: time="2025-07-12T00:16:35.410668181Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 12 00:16:35.411902 containerd[1552]: time="2025-07-12T00:16:35.411866288Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:35.414594 containerd[1552]: time="2025-07-12T00:16:35.414545014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:16:35.415056 containerd[1552]: time="2025-07-12T00:16:35.415019875Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest
\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.669755582s" Jul 12 00:16:35.415056 containerd[1552]: time="2025-07-12T00:16:35.415046625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 12 00:16:35.415771 containerd[1552]: time="2025-07-12T00:16:35.415663272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 00:16:36.093235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:16:36.095164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:16:36.345444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:36.365660 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:16:36.629090 kubelet[2098]: E0712 00:16:36.628936 2098 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:16:36.634170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:16:36.634388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:16:36.634790 systemd[1]: kubelet.service: Consumed 291ms CPU time, 110.1M memory peak. Jul 12 00:16:36.827901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229164897.mount: Deactivated successfully. 
Jul 12 00:16:38.092387 containerd[1552]: time="2025-07-12T00:16:38.092319881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:38.093269 containerd[1552]: time="2025-07-12T00:16:38.093233936Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 12 00:16:38.094496 containerd[1552]: time="2025-07-12T00:16:38.094453524Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:38.097129 containerd[1552]: time="2025-07-12T00:16:38.097102153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:38.098313 containerd[1552]: time="2025-07-12T00:16:38.098279271Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.682574742s" Jul 12 00:16:38.098357 containerd[1552]: time="2025-07-12T00:16:38.098315739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 12 00:16:38.098968 containerd[1552]: time="2025-07-12T00:16:38.098807221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:16:38.616076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202081614.mount: Deactivated successfully. 
Jul 12 00:16:38.623356 containerd[1552]: time="2025-07-12T00:16:38.623302732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:16:38.624279 containerd[1552]: time="2025-07-12T00:16:38.624239961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 00:16:38.625593 containerd[1552]: time="2025-07-12T00:16:38.625561099Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:16:38.627867 containerd[1552]: time="2025-07-12T00:16:38.627820819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:16:38.630233 containerd[1552]: time="2025-07-12T00:16:38.629797357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.94497ms" Jul 12 00:16:38.630233 containerd[1552]: time="2025-07-12T00:16:38.629900029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 00:16:38.631150 containerd[1552]: time="2025-07-12T00:16:38.631100932Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 00:16:39.152595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998860591.mount: Deactivated 
successfully. Jul 12 00:16:42.706998 containerd[1552]: time="2025-07-12T00:16:42.706894993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:42.708468 containerd[1552]: time="2025-07-12T00:16:42.708378496Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 12 00:16:42.710517 containerd[1552]: time="2025-07-12T00:16:42.710460512Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:42.714352 containerd[1552]: time="2025-07-12T00:16:42.714258828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:16:42.715441 containerd[1552]: time="2025-07-12T00:16:42.715391633Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.084233544s" Jul 12 00:16:42.715441 containerd[1552]: time="2025-07-12T00:16:42.715436357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 12 00:16:45.621036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:45.621323 systemd[1]: kubelet.service: Consumed 291ms CPU time, 110.1M memory peak. Jul 12 00:16:45.623908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 12 00:16:45.653116 systemd[1]: Reload requested from client PID 2249 ('systemctl') (unit session-7.scope)... Jul 12 00:16:45.653132 systemd[1]: Reloading... Jul 12 00:16:45.802389 zram_generator::config[2297]: No configuration found. Jul 12 00:16:46.182856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:46.317526 systemd[1]: Reloading finished in 664 ms. Jul 12 00:16:46.385905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:16:46.386012 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:16:46.386377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:46.386441 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.3M memory peak. Jul 12 00:16:46.388185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:16:46.587880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:46.603623 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:16:46.674195 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:16:46.674195 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:16:46.674195 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:16:46.674724 kubelet[2339]: I0712 00:16:46.674258 2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:16:47.033962 kubelet[2339]: I0712 00:16:47.033891 2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:16:47.033962 kubelet[2339]: I0712 00:16:47.033937 2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:16:47.034271 kubelet[2339]: I0712 00:16:47.034250 2339 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:16:47.102234 kubelet[2339]: E0712 00:16:47.102132 2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 00:16:47.103288 kubelet[2339]: I0712 00:16:47.103246 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:16:47.113793 kubelet[2339]: I0712 00:16:47.113753 2339 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:16:47.121891 kubelet[2339]: I0712 00:16:47.121814 2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:16:47.122371 kubelet[2339]: I0712 00:16:47.122281 2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:16:47.122615 kubelet[2339]: I0712 00:16:47.122343 2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:16:47.122798 kubelet[2339]: I0712 00:16:47.122630 2339 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:16:47.122798 
kubelet[2339]: I0712 00:16:47.122645 2339 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:16:47.123995 kubelet[2339]: I0712 00:16:47.123945 2339 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:16:47.128730 kubelet[2339]: I0712 00:16:47.128672 2339 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:16:47.128730 kubelet[2339]: I0712 00:16:47.128715 2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:16:47.128869 kubelet[2339]: I0712 00:16:47.128754 2339 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:16:47.128869 kubelet[2339]: I0712 00:16:47.128835 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:16:47.132661 kubelet[2339]: E0712 00:16:47.132554 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:16:47.132866 kubelet[2339]: E0712 00:16:47.132822 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:16:47.153919 kubelet[2339]: I0712 00:16:47.153855 2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:16:47.154690 kubelet[2339]: I0712 00:16:47.154639 2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:16:47.156831 kubelet[2339]: W0712 
00:16:47.156785 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:16:47.161065 kubelet[2339]: I0712 00:16:47.161020 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:16:47.161206 kubelet[2339]: I0712 00:16:47.161114 2339 server.go:1289] "Started kubelet" Jul 12 00:16:47.162317 kubelet[2339]: I0712 00:16:47.162187 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:16:47.163765 kubelet[2339]: I0712 00:16:47.163644 2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:16:47.163765 kubelet[2339]: I0712 00:16:47.163669 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:16:47.163938 kubelet[2339]: I0712 00:16:47.163768 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:16:47.165232 kubelet[2339]: I0712 00:16:47.164745 2339 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:16:47.166180 kubelet[2339]: I0712 00:16:47.165754 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:16:47.173739 kubelet[2339]: E0712 00:16:47.171856 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.173739 kubelet[2339]: I0712 00:16:47.171956 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:16:47.173739 kubelet[2339]: I0712 00:16:47.172130 2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:16:47.173739 kubelet[2339]: I0712 00:16:47.172317 2339 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:16:47.173739 kubelet[2339]: E0712 00:16:47.172953 2339 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:16:47.174712 kubelet[2339]: E0712 00:16:47.173240 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158d63b1138bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:16:47.161063613 +0000 UTC m=+0.552101536,LastTimestamp:2025-07-12 00:16:47.161063613 +0000 UTC m=+0.552101536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:16:47.175670 kubelet[2339]: I0712 00:16:47.175631 2339 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:16:47.175800 kubelet[2339]: I0712 00:16:47.175766 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:16:47.176627 kubelet[2339]: E0712 00:16:47.176593 2339 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:16:47.176727 kubelet[2339]: E0712 00:16:47.176677 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Jul 12 00:16:47.178340 kubelet[2339]: I0712 00:16:47.178314 2339 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:16:47.193059 kubelet[2339]: I0712 00:16:47.193037 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:16:47.193228 kubelet[2339]: I0712 00:16:47.193197 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:16:47.193320 kubelet[2339]: I0712 00:16:47.193301 2339 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:16:47.272905 kubelet[2339]: E0712 00:16:47.272785 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.373289 kubelet[2339]: E0712 00:16:47.373196 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.377798 kubelet[2339]: E0712 00:16:47.377751 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Jul 12 00:16:47.474237 kubelet[2339]: E0712 00:16:47.474148 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.574853 kubelet[2339]: E0712 00:16:47.574799 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.675647 kubelet[2339]: E0712 00:16:47.675490 2339 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.724895 kubelet[2339]: I0712 00:16:47.724812 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 00:16:47.726415 kubelet[2339]: I0712 00:16:47.726387 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:16:47.726510 kubelet[2339]: I0712 00:16:47.726434 2339 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:16:47.726510 kubelet[2339]: I0712 00:16:47.726468 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:16:47.726510 kubelet[2339]: I0712 00:16:47.726484 2339 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:16:47.726635 kubelet[2339]: E0712 00:16:47.726548 2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:16:47.727252 kubelet[2339]: E0712 00:16:47.727164 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:16:47.759539 kubelet[2339]: I0712 00:16:47.759356 2339 policy_none.go:49] "None policy: Start" Jul 12 00:16:47.759539 kubelet[2339]: I0712 00:16:47.759550 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:16:47.759752 kubelet[2339]: I0712 00:16:47.759589 2339 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:16:47.775944 kubelet[2339]: E0712 00:16:47.775843 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:16:47.778602 
kubelet[2339]: E0712 00:16:47.778547 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Jul 12 00:16:47.824413 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:16:47.827161 kubelet[2339]: E0712 00:16:47.827108 2339 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:16:47.838931 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:16:47.846132 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:16:47.866329 kubelet[2339]: E0712 00:16:47.866273 2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:16:47.866634 kubelet[2339]: I0712 00:16:47.866609 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:16:47.866763 kubelet[2339]: I0712 00:16:47.866636 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:16:47.867239 kubelet[2339]: I0712 00:16:47.867104 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:16:47.869200 kubelet[2339]: E0712 00:16:47.869145 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:16:47.869345 kubelet[2339]: E0712 00:16:47.869263 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:16:47.968884 kubelet[2339]: I0712 00:16:47.968711 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:16:47.969208 kubelet[2339]: E0712 00:16:47.969167 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 12 00:16:48.077802 kubelet[2339]: I0712 00:16:48.077734 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:48.077802 kubelet[2339]: I0712 00:16:48.077794 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:48.078079 kubelet[2339]: I0712 00:16:48.077896 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:48.116961 kubelet[2339]: E0712 00:16:48.116784 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158d63b1138bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:16:47.161063613 +0000 UTC m=+0.552101536,LastTimestamp:2025-07-12 00:16:47.161063613 +0000 UTC m=+0.552101536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:16:48.171090 kubelet[2339]: I0712 00:16:48.171030 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:16:48.171616 kubelet[2339]: E0712 00:16:48.171553 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 12 00:16:48.178114 kubelet[2339]: I0712 00:16:48.178074 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:48.178114 kubelet[2339]: I0712 00:16:48.178108 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:48.178229 kubelet[2339]: I0712 00:16:48.178132 
2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:48.178267 kubelet[2339]: I0712 00:16:48.178232 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:48.178317 kubelet[2339]: I0712 00:16:48.178281 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:48.179296 systemd[1]: Created slice kubepods-burstable-pod4e312e0abd66638e65648d226d22d4be.slice - libcontainer container kubepods-burstable-pod4e312e0abd66638e65648d226d22d4be.slice. 
Jul 12 00:16:48.193408 kubelet[2339]: E0712 00:16:48.193386 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:48.193835 kubelet[2339]: E0712 00:16:48.193713 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.194399 containerd[1552]: time="2025-07-12T00:16:48.194361110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e312e0abd66638e65648d226d22d4be,Namespace:kube-system,Attempt:0,}" Jul 12 00:16:48.270142 kubelet[2339]: E0712 00:16:48.270037 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 00:16:48.312552 kubelet[2339]: E0712 00:16:48.312500 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 00:16:48.410689 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 12 00:16:48.430274 kubelet[2339]: E0712 00:16:48.429791 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:48.430274 kubelet[2339]: E0712 00:16:48.430237 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.430894 containerd[1552]: time="2025-07-12T00:16:48.430862827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 12 00:16:48.433884 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 12 00:16:48.436280 kubelet[2339]: E0712 00:16:48.436242 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:48.466895 containerd[1552]: time="2025-07-12T00:16:48.466827531Z" level=info msg="connecting to shim 8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e" address="unix:///run/containerd/s/8e6fcc2eec8fa85ef01f95fae286b1771f61905deb183c3a23658455c9ee35de" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:48.471627 containerd[1552]: time="2025-07-12T00:16:48.471572661Z" level=info msg="connecting to shim 410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73" address="unix:///run/containerd/s/f6edd33eedb0b8d61c839e379b8bd6e2284b140fcde4d1e2c55f4c193886d37a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:48.482261 kubelet[2339]: I0712 00:16:48.479513 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:48.529531 systemd[1]: Started cri-containerd-8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e.scope - libcontainer container 8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e. Jul 12 00:16:48.546605 systemd[1]: Started cri-containerd-410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73.scope - libcontainer container 410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73. Jul 12 00:16:48.573714 kubelet[2339]: I0712 00:16:48.573660 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:16:48.574382 kubelet[2339]: E0712 00:16:48.574341 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Jul 12 00:16:48.579714 kubelet[2339]: E0712 00:16:48.579684 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Jul 12 00:16:48.585511 kubelet[2339]: E0712 00:16:48.585475 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 00:16:48.616889 containerd[1552]: time="2025-07-12T00:16:48.613423197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e312e0abd66638e65648d226d22d4be,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e\"" Jul 12 00:16:48.620924 kubelet[2339]: E0712 00:16:48.620612 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.640513 containerd[1552]: time="2025-07-12T00:16:48.640449577Z" level=info msg="CreateContainer within sandbox \"8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:16:48.642333 containerd[1552]: time="2025-07-12T00:16:48.642281463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73\"" Jul 12 00:16:48.642976 kubelet[2339]: E0712 00:16:48.642934 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.650537 containerd[1552]: time="2025-07-12T00:16:48.650494770Z" level=info msg="CreateContainer within sandbox \"410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:16:48.670647 containerd[1552]: time="2025-07-12T00:16:48.670588493Z" level=info msg="Container d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:48.674428 containerd[1552]: time="2025-07-12T00:16:48.674384457Z" level=info msg="Container 88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:48.684458 containerd[1552]: time="2025-07-12T00:16:48.684423899Z" level=info msg="CreateContainer within sandbox 
\"8adb67e5eac94a6a4ff3ddc2659348bca308d1bc6c25d53e73d757eec8e86e3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828\"" Jul 12 00:16:48.685083 containerd[1552]: time="2025-07-12T00:16:48.685031993Z" level=info msg="StartContainer for \"d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828\"" Jul 12 00:16:48.686112 containerd[1552]: time="2025-07-12T00:16:48.686089848Z" level=info msg="connecting to shim d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828" address="unix:///run/containerd/s/8e6fcc2eec8fa85ef01f95fae286b1771f61905deb183c3a23658455c9ee35de" protocol=ttrpc version=3 Jul 12 00:16:48.691793 containerd[1552]: time="2025-07-12T00:16:48.691746422Z" level=info msg="CreateContainer within sandbox \"410da810430c6707bd1fdc98ab5f9d86194458a91e256c2028613af0ed2bdc73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446\"" Jul 12 00:16:48.692451 containerd[1552]: time="2025-07-12T00:16:48.692408650Z" level=info msg="StartContainer for \"88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446\"" Jul 12 00:16:48.693943 containerd[1552]: time="2025-07-12T00:16:48.693917888Z" level=info msg="connecting to shim 88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446" address="unix:///run/containerd/s/f6edd33eedb0b8d61c839e379b8bd6e2284b140fcde4d1e2c55f4c193886d37a" protocol=ttrpc version=3 Jul 12 00:16:48.711412 systemd[1]: Started cri-containerd-d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828.scope - libcontainer container d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828. Jul 12 00:16:48.715991 systemd[1]: Started cri-containerd-88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446.scope - libcontainer container 88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446. 
Jul 12 00:16:48.736760 kubelet[2339]: E0712 00:16:48.736719 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.738123 containerd[1552]: time="2025-07-12T00:16:48.737769866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 12 00:16:48.786725 containerd[1552]: time="2025-07-12T00:16:48.785986256Z" level=info msg="connecting to shim fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f" address="unix:///run/containerd/s/9bd7c090ba2c744f83f2eb1b52ff9af7a3fcbabcefd2563da0478e3cda913a9a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:16:48.806190 kubelet[2339]: E0712 00:16:48.806126 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 00:16:48.813941 containerd[1552]: time="2025-07-12T00:16:48.813904543Z" level=info msg="StartContainer for \"88882027631d3bf4b70bd38a258f0abb7cd19378236c9a2d22cce7d5c962a446\" returns successfully" Jul 12 00:16:48.817542 containerd[1552]: time="2025-07-12T00:16:48.817409881Z" level=info msg="StartContainer for \"d11447d45744d7f78022f5e1b76d6199cbd6ca6c783b827ef97223b07c208828\" returns successfully" Jul 12 00:16:48.826783 systemd[1]: Started cri-containerd-fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f.scope - libcontainer container fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f. 
Jul 12 00:16:48.909701 containerd[1552]: time="2025-07-12T00:16:48.909648145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f\"" Jul 12 00:16:48.910595 kubelet[2339]: E0712 00:16:48.910551 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:48.916455 containerd[1552]: time="2025-07-12T00:16:48.916419413Z" level=info msg="CreateContainer within sandbox \"fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:16:48.925966 containerd[1552]: time="2025-07-12T00:16:48.925844630Z" level=info msg="Container 36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:16:48.936159 containerd[1552]: time="2025-07-12T00:16:48.936117098Z" level=info msg="CreateContainer within sandbox \"fa018f906fc04fa676b83b1ee4b20cbea4e364f7559bbfc54d7549fa5d16cd5f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a\"" Jul 12 00:16:48.936925 containerd[1552]: time="2025-07-12T00:16:48.936900908Z" level=info msg="StartContainer for \"36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a\"" Jul 12 00:16:48.938293 containerd[1552]: time="2025-07-12T00:16:48.938267663Z" level=info msg="connecting to shim 36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a" address="unix:///run/containerd/s/9bd7c090ba2c744f83f2eb1b52ff9af7a3fcbabcefd2563da0478e3cda913a9a" protocol=ttrpc version=3 Jul 12 00:16:48.972665 systemd[1]: Started cri-containerd-36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a.scope - 
libcontainer container 36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a. Jul 12 00:16:49.148520 containerd[1552]: time="2025-07-12T00:16:49.148405363Z" level=info msg="StartContainer for \"36e1bcbfc6a3acc6e019c5f75dd31015ebbbd50881328599c4a1434e6497a76a\" returns successfully" Jul 12 00:16:49.376656 kubelet[2339]: I0712 00:16:49.375980 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:16:49.744873 kubelet[2339]: E0712 00:16:49.744819 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:49.747408 kubelet[2339]: E0712 00:16:49.744987 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:49.748133 kubelet[2339]: E0712 00:16:49.748083 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:49.748530 kubelet[2339]: E0712 00:16:49.748501 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:49.748884 kubelet[2339]: E0712 00:16:49.748856 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:49.749030 kubelet[2339]: E0712 00:16:49.749001 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:50.751640 kubelet[2339]: E0712 00:16:50.751573 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 12 00:16:50.752127 kubelet[2339]: E0712 00:16:50.751751 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:50.756847 kubelet[2339]: E0712 00:16:50.756793 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:50.756988 kubelet[2339]: E0712 00:16:50.756966 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:50.758246 kubelet[2339]: E0712 00:16:50.757054 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:16:50.758246 kubelet[2339]: E0712 00:16:50.757153 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:50.910422 kubelet[2339]: E0712 00:16:50.910361 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:16:51.039674 kubelet[2339]: I0712 00:16:51.039506 2339 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:16:51.039674 kubelet[2339]: E0712 00:16:51.039568 2339 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:16:51.073342 kubelet[2339]: I0712 00:16:51.073283 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:51.081262 kubelet[2339]: E0712 00:16:51.081177 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:51.081262 kubelet[2339]: I0712 00:16:51.081268 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:51.083532 kubelet[2339]: E0712 00:16:51.083493 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:51.083532 kubelet[2339]: I0712 00:16:51.083517 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:51.085236 kubelet[2339]: E0712 00:16:51.085174 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:51.134685 kubelet[2339]: I0712 00:16:51.134626 2339 apiserver.go:52] "Watching apiserver" Jul 12 00:16:51.173345 kubelet[2339]: I0712 00:16:51.173293 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:16:51.751851 kubelet[2339]: I0712 00:16:51.751811 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:51.753904 kubelet[2339]: E0712 00:16:51.753876 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:51.754072 kubelet[2339]: E0712 00:16:51.754043 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.307237 
kubelet[2339]: I0712 00:16:52.306449 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:52.314967 kubelet[2339]: E0712 00:16:52.314918 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.514572 kubelet[2339]: I0712 00:16:52.514530 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:52.520668 kubelet[2339]: E0712 00:16:52.520616 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.754687 kubelet[2339]: I0712 00:16:52.754285 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:52.754687 kubelet[2339]: E0712 00:16:52.754550 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.754687 kubelet[2339]: E0712 00:16:52.754607 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:52.872074 kubelet[2339]: E0712 00:16:52.871952 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:53.756008 kubelet[2339]: E0712 00:16:53.755959 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:54.059734 systemd[1]: Reload requested from client PID 2625 ('systemctl') (unit 
session-7.scope)... Jul 12 00:16:54.059757 systemd[1]: Reloading... Jul 12 00:16:54.152262 zram_generator::config[2668]: No configuration found. Jul 12 00:16:54.258186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:16:54.394850 systemd[1]: Reloading finished in 334 ms. Jul 12 00:16:54.429188 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:16:54.448749 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:16:54.449309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:54.449381 systemd[1]: kubelet.service: Consumed 1.228s CPU time, 132.5M memory peak. Jul 12 00:16:54.452409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:16:54.766015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:16:54.780827 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:16:54.825166 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:16:54.825166 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:16:54.825166 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:16:54.825875 kubelet[2713]: I0712 00:16:54.825260 2713 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:16:54.834073 kubelet[2713]: I0712 00:16:54.834011 2713 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 00:16:54.834073 kubelet[2713]: I0712 00:16:54.834050 2713 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:16:54.834318 kubelet[2713]: I0712 00:16:54.834299 2713 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 00:16:54.835498 kubelet[2713]: I0712 00:16:54.835475 2713 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 00:16:54.837875 kubelet[2713]: I0712 00:16:54.837707 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:16:54.845452 kubelet[2713]: I0712 00:16:54.844739 2713 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 00:16:54.852791 kubelet[2713]: I0712 00:16:54.852748 2713 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:16:54.853045 kubelet[2713]: I0712 00:16:54.852998 2713 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:16:54.853298 kubelet[2713]: I0712 00:16:54.853039 2713 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:16:54.853470 kubelet[2713]: I0712 00:16:54.853302 2713 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:16:54.853470 
kubelet[2713]: I0712 00:16:54.853314 2713 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 00:16:54.853470 kubelet[2713]: I0712 00:16:54.853369 2713 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:16:54.853606 kubelet[2713]: I0712 00:16:54.853577 2713 kubelet.go:480] "Attempting to sync node with API server" Jul 12 00:16:54.853606 kubelet[2713]: I0712 00:16:54.853603 2713 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:16:54.854162 kubelet[2713]: I0712 00:16:54.854126 2713 kubelet.go:386] "Adding apiserver pod source" Jul 12 00:16:54.854162 kubelet[2713]: I0712 00:16:54.854156 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:16:54.856675 kubelet[2713]: I0712 00:16:54.856638 2713 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 12 00:16:54.857109 kubelet[2713]: I0712 00:16:54.857083 2713 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 00:16:54.864657 kubelet[2713]: I0712 00:16:54.864568 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:16:54.864657 kubelet[2713]: I0712 00:16:54.864656 2713 server.go:1289] "Started kubelet" Jul 12 00:16:54.866521 kubelet[2713]: I0712 00:16:54.866416 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:16:54.867955 kubelet[2713]: E0712 00:16:54.867851 2713 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:16:54.868199 kubelet[2713]: I0712 00:16:54.868176 2713 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:16:54.869345 kubelet[2713]: I0712 00:16:54.869326 2713 server.go:317] "Adding debug handlers to kubelet server" Jul 12 00:16:54.870062 kubelet[2713]: I0712 00:16:54.869359 2713 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:16:54.870571 kubelet[2713]: I0712 00:16:54.870534 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:16:54.870571 kubelet[2713]: I0712 00:16:54.870543 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:16:54.873944 kubelet[2713]: I0712 00:16:54.873901 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:16:54.875693 kubelet[2713]: I0712 00:16:54.875424 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:16:54.875693 kubelet[2713]: I0712 00:16:54.875636 2713 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:16:54.876004 kubelet[2713]: I0712 00:16:54.875716 2713 factory.go:223] Registration of the systemd container factory successfully Jul 12 00:16:54.876053 kubelet[2713]: I0712 00:16:54.876015 2713 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:16:54.880414 kubelet[2713]: I0712 00:16:54.880293 2713 factory.go:223] Registration of the containerd container factory successfully Jul 12 00:16:54.893967 kubelet[2713]: I0712 00:16:54.893890 2713 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 12 00:16:54.896240 kubelet[2713]: I0712 00:16:54.896031 2713 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 00:16:54.896240 kubelet[2713]: I0712 00:16:54.896066 2713 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 00:16:54.896240 kubelet[2713]: I0712 00:16:54.896094 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 00:16:54.896240 kubelet[2713]: I0712 00:16:54.896104 2713 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 00:16:54.896240 kubelet[2713]: E0712 00:16:54.896162 2713 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:16:54.923001 kubelet[2713]: I0712 00:16:54.922963 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:16:54.923001 kubelet[2713]: I0712 00:16:54.922984 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:16:54.923001 kubelet[2713]: I0712 00:16:54.923017 2713 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:16:54.923285 kubelet[2713]: I0712 00:16:54.923195 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:16:54.923285 kubelet[2713]: I0712 00:16:54.923230 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:16:54.923285 kubelet[2713]: I0712 00:16:54.923252 2713 policy_none.go:49] "None policy: Start" Jul 12 00:16:54.923285 kubelet[2713]: I0712 00:16:54.923264 2713 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:16:54.923285 kubelet[2713]: I0712 00:16:54.923277 2713 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:16:54.923416 kubelet[2713]: I0712 00:16:54.923383 2713 state_mem.go:75] "Updated machine memory state" Jul 12 00:16:54.931321 kubelet[2713]: E0712 00:16:54.930993 2713 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 00:16:54.931646 kubelet[2713]: I0712 00:16:54.931630 2713 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:16:54.931773 kubelet[2713]: I0712 00:16:54.931721 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:16:54.932067 kubelet[2713]: I0712 00:16:54.932052 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:16:54.934999 kubelet[2713]: E0712 00:16:54.934840 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:16:54.998231 kubelet[2713]: I0712 00:16:54.998165 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:54.998395 kubelet[2713]: I0712 00:16:54.998356 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:54.998503 kubelet[2713]: I0712 00:16:54.998468 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.010821 kubelet[2713]: E0712 00:16:55.010707 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.010821 kubelet[2713]: E0712 00:16:55.010745 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.011057 kubelet[2713]: E0712 00:16:55.010924 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:55.044445 kubelet[2713]: I0712 00:16:55.044325 2713 kubelet_node_status.go:75] "Attempting to 
register node" node="localhost" Jul 12 00:16:55.074881 kubelet[2713]: I0712 00:16:55.074849 2713 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 00:16:55.074881 kubelet[2713]: I0712 00:16:55.075052 2713 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:16:55.077755 kubelet[2713]: I0712 00:16:55.076706 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.077755 kubelet[2713]: I0712 00:16:55.076755 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:16:55.077755 kubelet[2713]: I0712 00:16:55.076778 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.077755 kubelet[2713]: I0712 00:16:55.076796 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.077755 kubelet[2713]: I0712 00:16:55.076814 2713 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.077962 kubelet[2713]: I0712 00:16:55.076829 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.077962 kubelet[2713]: I0712 00:16:55.076845 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e312e0abd66638e65648d226d22d4be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e312e0abd66638e65648d226d22d4be\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.077962 kubelet[2713]: I0712 00:16:55.076860 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.077962 kubelet[2713]: I0712 00:16:55.076877 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:16:55.097439 sudo[2752]: root : PWD=/home/core ; USER=root ; 
COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:16:55.098461 sudo[2752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 00:16:55.311678 kubelet[2713]: E0712 00:16:55.311513 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.313232 kubelet[2713]: E0712 00:16:55.312905 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.313232 kubelet[2713]: E0712 00:16:55.313085 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.650615 sudo[2752]: pam_unix(sudo:session): session closed for user root Jul 12 00:16:55.855717 kubelet[2713]: I0712 00:16:55.855654 2713 apiserver.go:52] "Watching apiserver" Jul 12 00:16:55.876388 kubelet[2713]: I0712 00:16:55.876329 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:16:55.910134 kubelet[2713]: I0712 00:16:55.909275 2713 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.910134 kubelet[2713]: E0712 00:16:55.909320 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.910134 kubelet[2713]: E0712 00:16:55.909832 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.918102 kubelet[2713]: E0712 00:16:55.918039 2713 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:16:55.918317 kubelet[2713]: E0712 00:16:55.918296 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:55.931792 kubelet[2713]: I0712 00:16:55.931703 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.931680021 podStartE2EDuration="3.931680021s" podCreationTimestamp="2025-07-12 00:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:55.931447009 +0000 UTC m=+1.145902957" watchObservedRunningTime="2025-07-12 00:16:55.931680021 +0000 UTC m=+1.146135969" Jul 12 00:16:55.943303 kubelet[2713]: I0712 00:16:55.943141 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.943124587 podStartE2EDuration="3.943124587s" podCreationTimestamp="2025-07-12 00:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:55.940820709 +0000 UTC m=+1.155276657" watchObservedRunningTime="2025-07-12 00:16:55.943124587 +0000 UTC m=+1.157580525" Jul 12 00:16:55.951313 kubelet[2713]: I0712 00:16:55.951203 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.951175473 podStartE2EDuration="3.951175473s" podCreationTimestamp="2025-07-12 00:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:16:55.9511475 +0000 UTC m=+1.165603448" watchObservedRunningTime="2025-07-12 00:16:55.951175473 +0000 UTC 
m=+1.165631421" Jul 12 00:16:56.910959 kubelet[2713]: E0712 00:16:56.910889 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:56.911687 kubelet[2713]: E0712 00:16:56.911008 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:57.064360 update_engine[1542]: I20250712 00:16:57.064232 1542 update_attempter.cc:509] Updating boot flags... Jul 12 00:16:58.300043 sudo[1770]: pam_unix(sudo:session): session closed for user root Jul 12 00:16:58.301859 sshd[1769]: Connection closed by 10.0.0.1 port 46986 Jul 12 00:16:58.302962 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jul 12 00:16:58.307738 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:46986.service: Deactivated successfully. Jul 12 00:16:58.310536 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:16:58.310819 systemd[1]: session-7.scope: Consumed 6.656s CPU time, 256.9M memory peak. Jul 12 00:16:58.312367 systemd-logind[1538]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:16:58.314047 systemd-logind[1538]: Removed session 7. Jul 12 00:16:59.249946 kubelet[2713]: I0712 00:16:59.249896 2713 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:16:59.250570 kubelet[2713]: I0712 00:16:59.250527 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:16:59.250617 containerd[1552]: time="2025-07-12T00:16:59.250349580Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:16:59.478107 kubelet[2713]: E0712 00:16:59.478045 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:16:59.917814 kubelet[2713]: E0712 00:16:59.917735 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.166673 systemd[1]: Created slice kubepods-besteffort-poddefb2ef0_2208_497d_b863_6ff3d01bb48c.slice - libcontainer container kubepods-besteffort-poddefb2ef0_2208_497d_b863_6ff3d01bb48c.slice. Jul 12 00:17:00.186227 systemd[1]: Created slice kubepods-burstable-pod42e7da73_e41a_481e_b3a4_36563e26e585.slice - libcontainer container kubepods-burstable-pod42e7da73_e41a_481e_b3a4_36563e26e585.slice. Jul 12 00:17:00.211141 kubelet[2713]: I0712 00:17:00.211029 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-cgroup\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211141 kubelet[2713]: I0712 00:17:00.211104 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-config-path\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211141 kubelet[2713]: I0712 00:17:00.211130 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-net\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " 
pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211141 kubelet[2713]: I0712 00:17:00.211154 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/defb2ef0-2208-497d-b863-6ff3d01bb48c-kube-proxy\") pod \"kube-proxy-njldl\" (UID: \"defb2ef0-2208-497d-b863-6ff3d01bb48c\") " pod="kube-system/kube-proxy-njldl" Jul 12 00:17:00.211141 kubelet[2713]: I0712 00:17:00.211170 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/defb2ef0-2208-497d-b863-6ff3d01bb48c-lib-modules\") pod \"kube-proxy-njldl\" (UID: \"defb2ef0-2208-497d-b863-6ff3d01bb48c\") " pod="kube-system/kube-proxy-njldl" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211187 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwhp\" (UniqueName: \"kubernetes.io/projected/defb2ef0-2208-497d-b863-6ff3d01bb48c-kube-api-access-pkwhp\") pod \"kube-proxy-njldl\" (UID: \"defb2ef0-2208-497d-b863-6ff3d01bb48c\") " pod="kube-system/kube-proxy-njldl" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211208 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-etc-cni-netd\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211245 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e7da73-e41a-481e-b3a4-36563e26e585-clustermesh-secrets\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211269 2713 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/defb2ef0-2208-497d-b863-6ff3d01bb48c-xtables-lock\") pod \"kube-proxy-njldl\" (UID: \"defb2ef0-2208-497d-b863-6ff3d01bb48c\") " pod="kube-system/kube-proxy-njldl" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211294 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-bpf-maps\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211613 kubelet[2713]: I0712 00:17:00.211315 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-lib-modules\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211334 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-kernel\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211355 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-run\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211378 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-hostproc\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211397 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cni-path\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211417 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-xtables-lock\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.211810 kubelet[2713]: I0712 00:17:00.211437 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-hubble-tls\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.212009 kubelet[2713]: I0712 00:17:00.211462 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvr4\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-kube-api-access-fmvr4\") pod \"cilium-lmkz4\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " pod="kube-system/cilium-lmkz4" Jul 12 00:17:00.458412 systemd[1]: Created slice kubepods-besteffort-pod9e5407a7_cb53_431b_9ea9_7d0c1c718a48.slice - libcontainer container kubepods-besteffort-pod9e5407a7_cb53_431b_9ea9_7d0c1c718a48.slice. 
Jul 12 00:17:00.480827 kubelet[2713]: E0712 00:17:00.480765 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.482082 containerd[1552]: time="2025-07-12T00:17:00.481499423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njldl,Uid:defb2ef0-2208-497d-b863-6ff3d01bb48c,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:00.493822 kubelet[2713]: E0712 00:17:00.493733 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.494292 containerd[1552]: time="2025-07-12T00:17:00.494107105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmkz4,Uid:42e7da73-e41a-481e-b3a4-36563e26e585,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:00.514421 kubelet[2713]: I0712 00:17:00.514347 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fvjst\" (UID: \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\") " pod="kube-system/cilium-operator-6c4d7847fc-fvjst" Jul 12 00:17:00.514421 kubelet[2713]: I0712 00:17:00.514411 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw89m\" (UniqueName: \"kubernetes.io/projected/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-kube-api-access-qw89m\") pod \"cilium-operator-6c4d7847fc-fvjst\" (UID: \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\") " pod="kube-system/cilium-operator-6c4d7847fc-fvjst" Jul 12 00:17:00.528852 containerd[1552]: time="2025-07-12T00:17:00.528759555Z" level=info msg="connecting to shim 1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119" 
address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:17:00.532056 containerd[1552]: time="2025-07-12T00:17:00.531992957Z" level=info msg="connecting to shim ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d" address="unix:///run/containerd/s/33fb91f7856c4933e9b1f6c4223b513d71b166b993c596da6a0ca601e85be699" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:17:00.623454 systemd[1]: Started cri-containerd-1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119.scope - libcontainer container 1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119. Jul 12 00:17:00.626486 systemd[1]: Started cri-containerd-ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d.scope - libcontainer container ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d. Jul 12 00:17:00.762920 kubelet[2713]: E0712 00:17:00.762759 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.763491 containerd[1552]: time="2025-07-12T00:17:00.763440141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fvjst,Uid:9e5407a7-cb53-431b-9ea9-7d0c1c718a48,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:00.782592 containerd[1552]: time="2025-07-12T00:17:00.782399859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njldl,Uid:defb2ef0-2208-497d-b863-6ff3d01bb48c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d\"" Jul 12 00:17:00.784159 kubelet[2713]: E0712 00:17:00.784131 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.785673 containerd[1552]: 
time="2025-07-12T00:17:00.785577756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmkz4,Uid:42e7da73-e41a-481e-b3a4-36563e26e585,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\"" Jul 12 00:17:00.786124 kubelet[2713]: E0712 00:17:00.786102 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:00.787365 containerd[1552]: time="2025-07-12T00:17:00.787337618Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:17:00.795783 containerd[1552]: time="2025-07-12T00:17:00.795704590Z" level=info msg="CreateContainer within sandbox \"ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:17:00.817408 containerd[1552]: time="2025-07-12T00:17:00.817331128Z" level=info msg="Container edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:00.827984 containerd[1552]: time="2025-07-12T00:17:00.827895370Z" level=info msg="CreateContainer within sandbox \"ab8449a0b8bc80a92de52b420de56bf7f91710a34790788a8c0da301ca88334d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635\"" Jul 12 00:17:00.828885 containerd[1552]: time="2025-07-12T00:17:00.828841332Z" level=info msg="StartContainer for \"edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635\"" Jul 12 00:17:00.831015 containerd[1552]: time="2025-07-12T00:17:00.830970503Z" level=info msg="connecting to shim edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635" address="unix:///run/containerd/s/33fb91f7856c4933e9b1f6c4223b513d71b166b993c596da6a0ca601e85be699" 
protocol=ttrpc version=3 Jul 12 00:17:00.831195 containerd[1552]: time="2025-07-12T00:17:00.831075461Z" level=info msg="connecting to shim 53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a" address="unix:///run/containerd/s/46526a45de34bcf0d2d4a83c420c5b562e4d621180f12f4090d940f4643ad118" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:17:00.863561 systemd[1]: Started cri-containerd-edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635.scope - libcontainer container edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635. Jul 12 00:17:00.868108 systemd[1]: Started cri-containerd-53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a.scope - libcontainer container 53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a. Jul 12 00:17:01.135245 containerd[1552]: time="2025-07-12T00:17:01.135139481Z" level=info msg="StartContainer for \"edfd0fcfa08c2c4fead6343fbb17a0f1fc8281f7ccf13809d6d4b99423a4f635\" returns successfully" Jul 12 00:17:01.169441 containerd[1552]: time="2025-07-12T00:17:01.169396587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fvjst,Uid:9e5407a7-cb53-431b-9ea9-7d0c1c718a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\"" Jul 12 00:17:01.170085 kubelet[2713]: E0712 00:17:01.170059 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:02.139973 kubelet[2713]: E0712 00:17:02.139919 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:03.678890 kubelet[2713]: E0712 00:17:03.678854 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:03.693290 kubelet[2713]: I0712 00:17:03.693192 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-njldl" podStartSLOduration=3.693170189 podStartE2EDuration="3.693170189s" podCreationTimestamp="2025-07-12 00:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:02.150533491 +0000 UTC m=+7.364989439" watchObservedRunningTime="2025-07-12 00:17:03.693170189 +0000 UTC m=+8.907626137" Jul 12 00:17:04.143356 kubelet[2713]: E0712 00:17:04.143283 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:04.573722 kubelet[2713]: E0712 00:17:04.573571 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:05.145066 kubelet[2713]: E0712 00:17:05.144906 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:06.146959 kubelet[2713]: E0712 00:17:06.146912 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:12.299384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670494369.mount: Deactivated successfully. 
Jul 12 00:17:24.893395 containerd[1552]: time="2025-07-12T00:17:24.893307914Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:24.895523 containerd[1552]: time="2025-07-12T00:17:24.895469969Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 12 00:17:24.897050 containerd[1552]: time="2025-07-12T00:17:24.896990728Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:24.898274 containerd[1552]: time="2025-07-12T00:17:24.898231860Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.110860799s" Jul 12 00:17:24.898274 containerd[1552]: time="2025-07-12T00:17:24.898268450Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 12 00:17:24.899866 containerd[1552]: time="2025-07-12T00:17:24.899802092Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:17:24.912514 containerd[1552]: time="2025-07-12T00:17:24.912455538Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:17:24.923883 containerd[1552]: time="2025-07-12T00:17:24.923817205Z" level=info msg="Container 922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:24.930850 containerd[1552]: time="2025-07-12T00:17:24.930781126Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\"" Jul 12 00:17:24.931471 containerd[1552]: time="2025-07-12T00:17:24.931440585Z" level=info msg="StartContainer for \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\"" Jul 12 00:17:24.932583 containerd[1552]: time="2025-07-12T00:17:24.932548798Z" level=info msg="connecting to shim 922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a" address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" protocol=ttrpc version=3 Jul 12 00:17:24.955462 systemd[1]: Started cri-containerd-922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a.scope - libcontainer container 922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a. Jul 12 00:17:24.992775 containerd[1552]: time="2025-07-12T00:17:24.992716258Z" level=info msg="StartContainer for \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" returns successfully" Jul 12 00:17:25.005817 systemd[1]: cri-containerd-922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a.scope: Deactivated successfully. 
Jul 12 00:17:25.009034 containerd[1552]: time="2025-07-12T00:17:25.008992471Z" level=info msg="received exit event container_id:\"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" id:\"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" pid:3158 exited_at:{seconds:1752279445 nanos:8532327}" Jul 12 00:17:25.009132 containerd[1552]: time="2025-07-12T00:17:25.009096607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" id:\"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" pid:3158 exited_at:{seconds:1752279445 nanos:8532327}" Jul 12 00:17:25.034801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a-rootfs.mount: Deactivated successfully. Jul 12 00:17:25.182415 kubelet[2713]: E0712 00:17:25.182042 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:26.185498 kubelet[2713]: E0712 00:17:26.185234 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:26.190450 containerd[1552]: time="2025-07-12T00:17:26.190348788Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:17:26.206027 containerd[1552]: time="2025-07-12T00:17:26.205971417Z" level=info msg="Container 8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:26.213816 containerd[1552]: time="2025-07-12T00:17:26.213762059Z" level=info msg="CreateContainer within sandbox 
\"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\"" Jul 12 00:17:26.214359 containerd[1552]: time="2025-07-12T00:17:26.214325328Z" level=info msg="StartContainer for \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\"" Jul 12 00:17:26.215551 containerd[1552]: time="2025-07-12T00:17:26.215518600Z" level=info msg="connecting to shim 8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f" address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" protocol=ttrpc version=3 Jul 12 00:17:26.235441 systemd[1]: Started cri-containerd-8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f.scope - libcontainer container 8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f. Jul 12 00:17:26.270935 containerd[1552]: time="2025-07-12T00:17:26.270894408Z" level=info msg="StartContainer for \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" returns successfully" Jul 12 00:17:26.287773 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:17:26.288377 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:26.288738 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:26.291353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:26.294065 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 12 00:17:26.295359 containerd[1552]: time="2025-07-12T00:17:26.294748738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" id:\"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" pid:3204 exited_at:{seconds:1752279446 nanos:294140285}" Jul 12 00:17:26.295359 containerd[1552]: time="2025-07-12T00:17:26.294838707Z" level=info msg="received exit event container_id:\"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" id:\"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" pid:3204 exited_at:{seconds:1752279446 nanos:294140285}" Jul 12 00:17:26.294905 systemd[1]: cri-containerd-8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f.scope: Deactivated successfully. Jul 12 00:17:26.325728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:27.188655 kubelet[2713]: E0712 00:17:27.188613 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:27.206316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f-rootfs.mount: Deactivated successfully. Jul 12 00:17:27.533688 containerd[1552]: time="2025-07-12T00:17:27.533559529Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:17:27.835051 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:46500.service - OpenSSH per-connection server daemon (10.0.0.1:46500). 
Jul 12 00:17:27.835768 containerd[1552]: time="2025-07-12T00:17:27.835124882Z" level=info msg="Container 72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:27.851244 containerd[1552]: time="2025-07-12T00:17:27.851164311Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\"" Jul 12 00:17:27.851753 containerd[1552]: time="2025-07-12T00:17:27.851712461Z" level=info msg="StartContainer for \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\"" Jul 12 00:17:27.853089 containerd[1552]: time="2025-07-12T00:17:27.853046779Z" level=info msg="connecting to shim 72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010" address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" protocol=ttrpc version=3 Jul 12 00:17:27.881432 systemd[1]: Started cri-containerd-72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010.scope - libcontainer container 72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010. Jul 12 00:17:27.906143 sshd[3243]: Accepted publickey for core from 10.0.0.1 port 46500 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:27.907369 sshd-session[3243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:27.915616 systemd-logind[1538]: New session 8 of user core. Jul 12 00:17:27.920449 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:17:27.937737 systemd[1]: cri-containerd-72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010.scope: Deactivated successfully. 
Jul 12 00:17:27.940261 containerd[1552]: time="2025-07-12T00:17:27.940151959Z" level=info msg="received exit event container_id:\"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" id:\"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" pid:3262 exited_at:{seconds:1752279447 nanos:939907048}" Jul 12 00:17:27.940556 containerd[1552]: time="2025-07-12T00:17:27.940506695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" id:\"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" pid:3262 exited_at:{seconds:1752279447 nanos:939907048}" Jul 12 00:17:27.940974 containerd[1552]: time="2025-07-12T00:17:27.940952362Z" level=info msg="StartContainer for \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" returns successfully" Jul 12 00:17:27.973427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010-rootfs.mount: Deactivated successfully. Jul 12 00:17:28.085335 sshd[3278]: Connection closed by 10.0.0.1 port 46500 Jul 12 00:17:28.085959 sshd-session[3243]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:28.090978 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:46500.service: Deactivated successfully. Jul 12 00:17:28.093103 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:17:28.095670 systemd-logind[1538]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:17:28.097282 systemd-logind[1538]: Removed session 8. 
Jul 12 00:17:28.195642 kubelet[2713]: E0712 00:17:28.195601 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:28.204147 containerd[1552]: time="2025-07-12T00:17:28.202706749Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:17:28.222664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862251109.mount: Deactivated successfully. Jul 12 00:17:28.224323 containerd[1552]: time="2025-07-12T00:17:28.224081079Z" level=info msg="Container 9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:28.244900 containerd[1552]: time="2025-07-12T00:17:28.244841126Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\"" Jul 12 00:17:28.246712 containerd[1552]: time="2025-07-12T00:17:28.246537383Z" level=info msg="StartContainer for \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\"" Jul 12 00:17:28.248419 containerd[1552]: time="2025-07-12T00:17:28.248340370Z" level=info msg="connecting to shim 9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548" address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" protocol=ttrpc version=3 Jul 12 00:17:28.275451 systemd[1]: Started cri-containerd-9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548.scope - libcontainer container 9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548. 
Jul 12 00:17:28.283088 containerd[1552]: time="2025-07-12T00:17:28.283023918Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:28.283940 containerd[1552]: time="2025-07-12T00:17:28.283860630Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 12 00:17:28.284905 containerd[1552]: time="2025-07-12T00:17:28.284866680Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:28.286494 containerd[1552]: time="2025-07-12T00:17:28.286395613Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.386554698s" Jul 12 00:17:28.286494 containerd[1552]: time="2025-07-12T00:17:28.286445237Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 12 00:17:28.292750 containerd[1552]: time="2025-07-12T00:17:28.292713163Z" level=info msg="CreateContainer within sandbox \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:17:28.303685 containerd[1552]: time="2025-07-12T00:17:28.303639914Z" level=info msg="Container 
20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:28.309636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874145036.mount: Deactivated successfully. Jul 12 00:17:28.313096 systemd[1]: cri-containerd-9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548.scope: Deactivated successfully. Jul 12 00:17:28.313787 containerd[1552]: time="2025-07-12T00:17:28.313735021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" id:\"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" pid:3322 exited_at:{seconds:1752279448 nanos:313360828}" Jul 12 00:17:28.315736 containerd[1552]: time="2025-07-12T00:17:28.315689093Z" level=info msg="received exit event container_id:\"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" id:\"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" pid:3322 exited_at:{seconds:1752279448 nanos:313360828}" Jul 12 00:17:28.317172 containerd[1552]: time="2025-07-12T00:17:28.317129329Z" level=info msg="CreateContainer within sandbox \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\"" Jul 12 00:17:28.317995 containerd[1552]: time="2025-07-12T00:17:28.317772267Z" level=info msg="StartContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\"" Jul 12 00:17:28.317995 containerd[1552]: time="2025-07-12T00:17:28.317787997Z" level=info msg="StartContainer for \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" returns successfully" Jul 12 00:17:28.319268 containerd[1552]: time="2025-07-12T00:17:28.319243010Z" level=info msg="connecting to shim 20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477" 
address="unix:///run/containerd/s/46526a45de34bcf0d2d4a83c420c5b562e4d621180f12f4090d940f4643ad118" protocol=ttrpc version=3 Jul 12 00:17:28.343497 systemd[1]: Started cri-containerd-20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477.scope - libcontainer container 20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477. Jul 12 00:17:29.099453 containerd[1552]: time="2025-07-12T00:17:29.099408408Z" level=info msg="StartContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" returns successfully" Jul 12 00:17:29.203995 kubelet[2713]: E0712 00:17:29.203932 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.210340 kubelet[2713]: E0712 00:17:29.210282 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:29.216228 containerd[1552]: time="2025-07-12T00:17:29.216148362Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:17:29.218798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:29.237263 containerd[1552]: time="2025-07-12T00:17:29.235766210Z" level=info msg="Container 2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:29.250767 containerd[1552]: time="2025-07-12T00:17:29.250706799Z" level=info msg="CreateContainer within sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\"" Jul 12 00:17:29.252724 containerd[1552]: time="2025-07-12T00:17:29.252675147Z" level=info msg="StartContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\"" Jul 12 00:17:29.256615 containerd[1552]: time="2025-07-12T00:17:29.256557581Z" level=info msg="connecting to shim 2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c" address="unix:///run/containerd/s/61e911a7ee2849c5be23c5bc4c6df2e214c3e646be2d7f9e9f2be8d026491f86" protocol=ttrpc version=3 Jul 12 00:17:29.316153 kubelet[2713]: I0712 00:17:29.315492 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fvjst" podStartSLOduration=2.19863342 podStartE2EDuration="29.315454784s" podCreationTimestamp="2025-07-12 00:17:00 +0000 UTC" firstStartedPulling="2025-07-12 00:17:01.170730622 +0000 UTC m=+6.385186570" lastFinishedPulling="2025-07-12 00:17:28.287551986 +0000 UTC m=+33.502007934" observedRunningTime="2025-07-12 00:17:29.305271282 +0000 UTC m=+34.519727240" watchObservedRunningTime="2025-07-12 00:17:29.315454784 +0000 UTC m=+34.529910732" Jul 12 00:17:29.327846 systemd[1]: Started cri-containerd-2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c.scope - libcontainer container 2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c. 
Jul 12 00:17:29.504080 containerd[1552]: time="2025-07-12T00:17:29.503946831Z" level=info msg="StartContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" returns successfully" Jul 12 00:17:29.571895 containerd[1552]: time="2025-07-12T00:17:29.571849091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" id:\"891be1f5ec7881856dae264af9519c65da127ceda1f334471a2dd76105184595\" pid:3436 exited_at:{seconds:1752279449 nanos:571498753}" Jul 12 00:17:29.638496 kubelet[2713]: I0712 00:17:29.638454 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:17:29.965519 systemd[1]: Created slice kubepods-burstable-pod01a115ff_bf94_4fcc_9186_8831d507d6c5.slice - libcontainer container kubepods-burstable-pod01a115ff_bf94_4fcc_9186_8831d507d6c5.slice. Jul 12 00:17:29.976474 systemd[1]: Created slice kubepods-burstable-pod1c9240a1_fc20_436f_af92_ada6bec24209.slice - libcontainer container kubepods-burstable-pod1c9240a1_fc20_436f_af92_ada6bec24209.slice. 
Jul 12 00:17:29.980534 kubelet[2713]: I0712 00:17:29.979440 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01a115ff-bf94-4fcc-9186-8831d507d6c5-config-volume\") pod \"coredns-674b8bbfcf-vqclj\" (UID: \"01a115ff-bf94-4fcc-9186-8831d507d6c5\") " pod="kube-system/coredns-674b8bbfcf-vqclj" Jul 12 00:17:29.980534 kubelet[2713]: I0712 00:17:29.979472 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c9240a1-fc20-436f-af92-ada6bec24209-config-volume\") pod \"coredns-674b8bbfcf-cgtms\" (UID: \"1c9240a1-fc20-436f-af92-ada6bec24209\") " pod="kube-system/coredns-674b8bbfcf-cgtms" Jul 12 00:17:29.980534 kubelet[2713]: I0712 00:17:29.979489 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5864\" (UniqueName: \"kubernetes.io/projected/1c9240a1-fc20-436f-af92-ada6bec24209-kube-api-access-t5864\") pod \"coredns-674b8bbfcf-cgtms\" (UID: \"1c9240a1-fc20-436f-af92-ada6bec24209\") " pod="kube-system/coredns-674b8bbfcf-cgtms" Jul 12 00:17:29.980534 kubelet[2713]: I0712 00:17:29.979505 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tvl\" (UniqueName: \"kubernetes.io/projected/01a115ff-bf94-4fcc-9186-8831d507d6c5-kube-api-access-74tvl\") pod \"coredns-674b8bbfcf-vqclj\" (UID: \"01a115ff-bf94-4fcc-9186-8831d507d6c5\") " pod="kube-system/coredns-674b8bbfcf-vqclj" Jul 12 00:17:30.218911 kubelet[2713]: E0712 00:17:30.218791 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.219542 kubelet[2713]: E0712 00:17:30.219517 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.238424 kubelet[2713]: I0712 00:17:30.238361 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmkz4" podStartSLOduration=6.125344873 podStartE2EDuration="30.238342799s" podCreationTimestamp="2025-07-12 00:17:00 +0000 UTC" firstStartedPulling="2025-07-12 00:17:00.786636982 +0000 UTC m=+6.001092930" lastFinishedPulling="2025-07-12 00:17:24.899634908 +0000 UTC m=+30.114090856" observedRunningTime="2025-07-12 00:17:30.233645395 +0000 UTC m=+35.448101343" watchObservedRunningTime="2025-07-12 00:17:30.238342799 +0000 UTC m=+35.452798747" Jul 12 00:17:30.272078 kubelet[2713]: E0712 00:17:30.272025 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.272993 containerd[1552]: time="2025-07-12T00:17:30.272945042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vqclj,Uid:01a115ff-bf94-4fcc-9186-8831d507d6c5,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:30.282839 kubelet[2713]: E0712 00:17:30.282448 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:30.283408 containerd[1552]: time="2025-07-12T00:17:30.283364666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cgtms,Uid:1c9240a1-fc20-436f-af92-ada6bec24209,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:31.221041 kubelet[2713]: E0712 00:17:31.220983 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:32.223124 kubelet[2713]: E0712 00:17:32.223064 2713 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:32.781930 systemd-networkd[1461]: cilium_host: Link UP Jul 12 00:17:32.782105 systemd-networkd[1461]: cilium_net: Link UP Jul 12 00:17:32.782537 systemd-networkd[1461]: cilium_net: Gained carrier Jul 12 00:17:32.782848 systemd-networkd[1461]: cilium_host: Gained carrier Jul 12 00:17:32.894313 systemd-networkd[1461]: cilium_vxlan: Link UP Jul 12 00:17:32.894327 systemd-networkd[1461]: cilium_vxlan: Gained carrier Jul 12 00:17:33.104369 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:46512.service - OpenSSH per-connection server daemon (10.0.0.1:46512). Jul 12 00:17:33.120261 kernel: NET: Registered PF_ALG protocol family Jul 12 00:17:33.167790 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 46512 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:33.170040 sshd-session[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:33.176581 systemd-logind[1538]: New session 9 of user core. Jul 12 00:17:33.185536 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:17:33.325153 sshd[3640]: Connection closed by 10.0.0.1 port 46512 Jul 12 00:17:33.325559 sshd-session[3627]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:33.331085 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:46512.service: Deactivated successfully. Jul 12 00:17:33.333714 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:17:33.335095 systemd-logind[1538]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:17:33.336744 systemd-logind[1538]: Removed session 9. 
Jul 12 00:17:33.424432 systemd-networkd[1461]: cilium_host: Gained IPv6LL Jul 12 00:17:33.552468 systemd-networkd[1461]: cilium_net: Gained IPv6LL Jul 12 00:17:33.606465 kubelet[2713]: E0712 00:17:33.606428 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:33.846055 systemd-networkd[1461]: lxc_health: Link UP Jul 12 00:17:33.847392 systemd-networkd[1461]: lxc_health: Gained carrier Jul 12 00:17:34.337106 systemd-networkd[1461]: lxc0a546bee97fc: Link UP Jul 12 00:17:34.338265 kernel: eth0: renamed from tmpc5ac2 Jul 12 00:17:34.351027 systemd-networkd[1461]: lxc0a546bee97fc: Gained carrier Jul 12 00:17:34.351326 systemd-networkd[1461]: lxc139648f826ab: Link UP Jul 12 00:17:34.356250 kernel: eth0: renamed from tmp925ad Jul 12 00:17:34.358035 systemd-networkd[1461]: lxc139648f826ab: Gained carrier Jul 12 00:17:34.495795 kubelet[2713]: E0712 00:17:34.495760 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:34.640476 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL Jul 12 00:17:35.025458 systemd-networkd[1461]: lxc_health: Gained IPv6LL Jul 12 00:17:35.233967 kubelet[2713]: E0712 00:17:35.233928 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.536572 systemd-networkd[1461]: lxc139648f826ab: Gained IPv6LL Jul 12 00:17:35.664421 systemd-networkd[1461]: lxc0a546bee97fc: Gained IPv6LL Jul 12 00:17:38.341648 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:44048.service - OpenSSH per-connection server daemon (10.0.0.1:44048). 
Jul 12 00:17:38.396796 sshd[3916]: Accepted publickey for core from 10.0.0.1 port 44048 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:38.398864 sshd-session[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:38.404165 systemd-logind[1538]: New session 10 of user core. Jul 12 00:17:38.408392 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:17:38.566351 sshd[3920]: Connection closed by 10.0.0.1 port 44048 Jul 12 00:17:38.577613 sshd-session[3916]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:38.582942 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:44048.service: Deactivated successfully. Jul 12 00:17:38.585794 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:17:38.586768 systemd-logind[1538]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:17:38.588321 systemd-logind[1538]: Removed session 10. Jul 12 00:17:39.125129 containerd[1552]: time="2025-07-12T00:17:39.124784576Z" level=info msg="connecting to shim c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108" address="unix:///run/containerd/s/a8b18eb28dcd216cb5b14a3399a040d0d276545889e478d3c67f03302e940acb" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:17:39.125658 containerd[1552]: time="2025-07-12T00:17:39.125614605Z" level=info msg="connecting to shim 925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df" address="unix:///run/containerd/s/d345bc3e420c57590bf1e097c588140e6d150b623d77dbb5bce31c7ca5fc084a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 00:17:39.159371 systemd[1]: Started cri-containerd-925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df.scope - libcontainer container 925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df. 
Jul 12 00:17:39.161076 systemd[1]: Started cri-containerd-c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108.scope - libcontainer container c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108. Jul 12 00:17:39.175297 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:17:39.176795 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:17:39.217101 containerd[1552]: time="2025-07-12T00:17:39.216998024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cgtms,Uid:1c9240a1-fc20-436f-af92-ada6bec24209,Namespace:kube-system,Attempt:0,} returns sandbox id \"925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df\"" Jul 12 00:17:39.223009 kubelet[2713]: E0712 00:17:39.222966 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:39.242363 containerd[1552]: time="2025-07-12T00:17:39.242287544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vqclj,Uid:01a115ff-bf94-4fcc-9186-8831d507d6c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108\"" Jul 12 00:17:39.243086 kubelet[2713]: E0712 00:17:39.243066 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:39.243286 containerd[1552]: time="2025-07-12T00:17:39.243077708Z" level=info msg="CreateContainer within sandbox \"925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:17:39.248521 containerd[1552]: time="2025-07-12T00:17:39.248475612Z" level=info 
msg="CreateContainer within sandbox \"c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:17:39.269357 containerd[1552]: time="2025-07-12T00:17:39.269233274Z" level=info msg="Container 4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:39.273238 containerd[1552]: time="2025-07-12T00:17:39.273177068Z" level=info msg="Container 86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5: CDI devices from CRI Config.CDIDevices: []" Jul 12 00:17:39.279050 containerd[1552]: time="2025-07-12T00:17:39.278990914Z" level=info msg="CreateContainer within sandbox \"925ad4aaa9b8e3f6a8fe48ff72b1f4b0b0c75b7f3dab8c7fd3f8e268da6988df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717\"" Jul 12 00:17:39.279723 containerd[1552]: time="2025-07-12T00:17:39.279654190Z" level=info msg="StartContainer for \"4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717\"" Jul 12 00:17:39.285526 containerd[1552]: time="2025-07-12T00:17:39.285465791Z" level=info msg="CreateContainer within sandbox \"c5ac2c056038efa53e0a75c4e6507a5f534ce8514b199ce2a147ec4918b41108\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5\"" Jul 12 00:17:39.286202 containerd[1552]: time="2025-07-12T00:17:39.286142531Z" level=info msg="StartContainer for \"86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5\"" Jul 12 00:17:39.288000 containerd[1552]: time="2025-07-12T00:17:39.287950435Z" level=info msg="connecting to shim 86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5" address="unix:///run/containerd/s/a8b18eb28dcd216cb5b14a3399a040d0d276545889e478d3c67f03302e940acb" protocol=ttrpc version=3 Jul 12 00:17:39.299718 containerd[1552]: 
time="2025-07-12T00:17:39.299623912Z" level=info msg="connecting to shim 4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717" address="unix:///run/containerd/s/d345bc3e420c57590bf1e097c588140e6d150b623d77dbb5bce31c7ca5fc084a" protocol=ttrpc version=3 Jul 12 00:17:39.311673 systemd[1]: Started cri-containerd-86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5.scope - libcontainer container 86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5. Jul 12 00:17:39.332364 systemd[1]: Started cri-containerd-4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717.scope - libcontainer container 4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717. Jul 12 00:17:39.419831 containerd[1552]: time="2025-07-12T00:17:39.419720790Z" level=info msg="StartContainer for \"86de42f70555abbdf2bde7311b16fb6f8d0c62d2ba6dbda1f8e7f5a7a84577f5\" returns successfully" Jul 12 00:17:39.419831 containerd[1552]: time="2025-07-12T00:17:39.419784971Z" level=info msg="StartContainer for \"4fe6e3efbfd39c73bc9e9ef89b2af642881f9d64f713f6662fb6a02bb6f67717\" returns successfully" Jul 12 00:17:40.262306 kubelet[2713]: E0712 00:17:40.261929 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.263008 kubelet[2713]: E0712 00:17:40.262982 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.297263 kubelet[2713]: I0712 00:17:40.296710 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cgtms" podStartSLOduration=40.296682643 podStartE2EDuration="40.296682643s" podCreationTimestamp="2025-07-12 00:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-12 00:17:40.291379989 +0000 UTC m=+45.505835927" watchObservedRunningTime="2025-07-12 00:17:40.296682643 +0000 UTC m=+45.511138591" Jul 12 00:17:40.309804 kubelet[2713]: I0712 00:17:40.309729 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vqclj" podStartSLOduration=40.309705813 podStartE2EDuration="40.309705813s" podCreationTimestamp="2025-07-12 00:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:40.308935127 +0000 UTC m=+45.523391075" watchObservedRunningTime="2025-07-12 00:17:40.309705813 +0000 UTC m=+45.524161761" Jul 12 00:17:41.258592 kubelet[2713]: E0712 00:17:41.258197 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:41.258592 kubelet[2713]: E0712 00:17:41.258465 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:42.260539 kubelet[2713]: E0712 00:17:42.260474 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:42.260539 kubelet[2713]: E0712 00:17:42.260485 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:43.262302 kubelet[2713]: E0712 00:17:43.262266 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:43.582758 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:44058.service - 
OpenSSH per-connection server daemon (10.0.0.1:44058). Jul 12 00:17:43.637081 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 44058 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:43.639014 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:43.644151 systemd-logind[1538]: New session 11 of user core. Jul 12 00:17:43.654399 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:17:43.769707 sshd[4110]: Connection closed by 10.0.0.1 port 44058 Jul 12 00:17:43.770008 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:43.774564 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:44058.service: Deactivated successfully. Jul 12 00:17:43.776761 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:17:43.777649 systemd-logind[1538]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:17:43.778873 systemd-logind[1538]: Removed session 11. Jul 12 00:17:48.785077 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:45364.service - OpenSSH per-connection server daemon (10.0.0.1:45364). Jul 12 00:17:48.837179 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 45364 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:48.838791 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:48.843152 systemd-logind[1538]: New session 12 of user core. Jul 12 00:17:48.851339 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:17:48.969949 sshd[4127]: Connection closed by 10.0.0.1 port 45364 Jul 12 00:17:48.970426 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:48.986385 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:45364.service: Deactivated successfully. Jul 12 00:17:48.988705 systemd[1]: session-12.scope: Deactivated successfully. 
Jul 12 00:17:48.989541 systemd-logind[1538]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:17:48.993021 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374). Jul 12 00:17:48.993737 systemd-logind[1538]: Removed session 12. Jul 12 00:17:49.037030 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:49.039037 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:49.044474 systemd-logind[1538]: New session 13 of user core. Jul 12 00:17:49.054384 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:17:49.263100 sshd[4143]: Connection closed by 10.0.0.1 port 45374 Jul 12 00:17:49.265439 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:49.278506 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:45374.service: Deactivated successfully. Jul 12 00:17:49.280759 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:17:49.282233 systemd-logind[1538]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:17:49.285921 systemd-logind[1538]: Removed session 13. Jul 12 00:17:49.288385 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:45378.service - OpenSSH per-connection server daemon (10.0.0.1:45378). Jul 12 00:17:49.345729 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 45378 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:49.348140 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:49.353874 systemd-logind[1538]: New session 14 of user core. Jul 12 00:17:49.364594 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 12 00:17:49.717097 sshd[4156]: Connection closed by 10.0.0.1 port 45378 Jul 12 00:17:49.717960 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:49.724006 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:45378.service: Deactivated successfully. Jul 12 00:17:49.726100 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:17:49.727252 systemd-logind[1538]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:17:49.731665 systemd-logind[1538]: Removed session 14. Jul 12 00:17:54.735437 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:45386.service - OpenSSH per-connection server daemon (10.0.0.1:45386). Jul 12 00:17:54.801199 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 45386 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:17:54.803347 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:54.808753 systemd-logind[1538]: New session 15 of user core. Jul 12 00:17:54.818472 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:17:54.956170 sshd[4173]: Connection closed by 10.0.0.1 port 45386 Jul 12 00:17:54.957069 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:54.961173 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:45386.service: Deactivated successfully. Jul 12 00:17:54.963812 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:17:54.966739 systemd-logind[1538]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:17:54.968662 systemd-logind[1538]: Removed session 15. Jul 12 00:17:59.976251 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:43934.service - OpenSSH per-connection server daemon (10.0.0.1:43934). 
Jul 12 00:18:00.035803 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 43934 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:00.037940 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:00.043556 systemd-logind[1538]: New session 16 of user core. Jul 12 00:18:00.051432 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:18:00.219933 sshd[4190]: Connection closed by 10.0.0.1 port 43934 Jul 12 00:18:00.220341 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:00.224704 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:43934.service: Deactivated successfully. Jul 12 00:18:00.227260 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:18:00.229153 systemd-logind[1538]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:18:00.230891 systemd-logind[1538]: Removed session 16. Jul 12 00:18:05.238660 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:43936.service - OpenSSH per-connection server daemon (10.0.0.1:43936). Jul 12 00:18:05.287282 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 43936 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:05.289183 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:05.294372 systemd-logind[1538]: New session 17 of user core. Jul 12 00:18:05.308491 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:18:05.416164 sshd[4208]: Connection closed by 10.0.0.1 port 43936 Jul 12 00:18:05.416530 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:05.430199 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:43936.service: Deactivated successfully. Jul 12 00:18:05.432071 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:18:05.432885 systemd-logind[1538]: Session 17 logged out. Waiting for processes to exit. 
Jul 12 00:18:05.435701 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:43940.service - OpenSSH per-connection server daemon (10.0.0.1:43940). Jul 12 00:18:05.436454 systemd-logind[1538]: Removed session 17. Jul 12 00:18:05.489108 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 43940 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:05.490995 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:05.495819 systemd-logind[1538]: New session 18 of user core. Jul 12 00:18:05.509418 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:18:06.613644 sshd[4223]: Connection closed by 10.0.0.1 port 43940 Jul 12 00:18:06.614259 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:06.624562 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:43940.service: Deactivated successfully. Jul 12 00:18:06.626576 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:18:06.627714 systemd-logind[1538]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:18:06.630786 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:38826.service - OpenSSH per-connection server daemon (10.0.0.1:38826). Jul 12 00:18:06.631811 systemd-logind[1538]: Removed session 18. Jul 12 00:18:06.693065 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 38826 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:06.694942 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:06.700094 systemd-logind[1538]: New session 19 of user core. Jul 12 00:18:06.718575 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 12 00:18:07.519658 sshd[4236]: Connection closed by 10.0.0.1 port 38826 Jul 12 00:18:07.520624 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:07.531639 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:38826.service: Deactivated successfully. Jul 12 00:18:07.534548 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:18:07.537819 systemd-logind[1538]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:18:07.543928 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:38830.service - OpenSSH per-connection server daemon (10.0.0.1:38830). Jul 12 00:18:07.545655 systemd-logind[1538]: Removed session 19. Jul 12 00:18:07.601706 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 38830 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:07.604167 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:07.610082 systemd-logind[1538]: New session 20 of user core. Jul 12 00:18:07.620412 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:18:07.994527 sshd[4264]: Connection closed by 10.0.0.1 port 38830 Jul 12 00:18:07.995396 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:08.006260 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:38830.service: Deactivated successfully. Jul 12 00:18:08.009945 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:18:08.011145 systemd-logind[1538]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:18:08.016372 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838). Jul 12 00:18:08.017658 systemd-logind[1538]: Removed session 20. 
Jul 12 00:18:08.065686 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:08.067490 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:08.073114 systemd-logind[1538]: New session 21 of user core. Jul 12 00:18:08.080363 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:18:08.201454 sshd[4277]: Connection closed by 10.0.0.1 port 38838 Jul 12 00:18:08.201770 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:08.206192 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:38838.service: Deactivated successfully. Jul 12 00:18:08.208329 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:18:08.209339 systemd-logind[1538]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:18:08.210766 systemd-logind[1538]: Removed session 21. Jul 12 00:18:13.224515 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:38840.service - OpenSSH per-connection server daemon (10.0.0.1:38840). Jul 12 00:18:13.290067 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 38840 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:13.292361 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:13.298476 systemd-logind[1538]: New session 22 of user core. Jul 12 00:18:13.306381 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:18:13.459016 sshd[4293]: Connection closed by 10.0.0.1 port 38840 Jul 12 00:18:13.459486 sshd-session[4291]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:13.465246 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:38840.service: Deactivated successfully. Jul 12 00:18:13.467520 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:18:13.468646 systemd-logind[1538]: Session 22 logged out. Waiting for processes to exit. 
Jul 12 00:18:13.470104 systemd-logind[1538]: Removed session 22. Jul 12 00:18:15.897990 kubelet[2713]: E0712 00:18:15.897835 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:16.897870 kubelet[2713]: E0712 00:18:16.897817 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:18.474134 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Jul 12 00:18:18.538112 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:18.539701 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:18.544440 systemd-logind[1538]: New session 23 of user core. Jul 12 00:18:18.558362 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 12 00:18:18.716180 sshd[4311]: Connection closed by 10.0.0.1 port 40322 Jul 12 00:18:18.716651 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:18.722524 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:40322.service: Deactivated successfully. Jul 12 00:18:18.725533 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:18:18.726842 systemd-logind[1538]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:18:18.728887 systemd-logind[1538]: Removed session 23. 
Jul 12 00:18:21.896808 kubelet[2713]: E0712 00:18:21.896749 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:23.733668 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:40330.service - OpenSSH per-connection server daemon (10.0.0.1:40330). Jul 12 00:18:23.784849 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 40330 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:23.786377 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:23.790999 systemd-logind[1538]: New session 24 of user core. Jul 12 00:18:23.801354 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:18:23.910279 sshd[4326]: Connection closed by 10.0.0.1 port 40330 Jul 12 00:18:23.910770 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:23.923051 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:40330.service: Deactivated successfully. Jul 12 00:18:23.925249 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:18:23.926113 systemd-logind[1538]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:18:23.929204 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:40336.service - OpenSSH per-connection server daemon (10.0.0.1:40336). Jul 12 00:18:23.930478 systemd-logind[1538]: Removed session 24. Jul 12 00:18:23.986140 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 40336 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:23.987639 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:23.992453 systemd-logind[1538]: New session 25 of user core. Jul 12 00:18:24.000370 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 12 00:18:25.906085 containerd[1552]: time="2025-07-12T00:18:25.905542772Z" level=info msg="StopContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" with timeout 30 (s)" Jul 12 00:18:25.913180 containerd[1552]: time="2025-07-12T00:18:25.913117317Z" level=info msg="Stop container \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" with signal terminated" Jul 12 00:18:25.929451 systemd[1]: cri-containerd-20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477.scope: Deactivated successfully. Jul 12 00:18:25.931363 containerd[1552]: time="2025-07-12T00:18:25.930962682Z" level=info msg="received exit event container_id:\"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" id:\"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" pid:3359 exited_at:{seconds:1752279505 nanos:930642493}" Jul 12 00:18:25.931363 containerd[1552]: time="2025-07-12T00:18:25.931088672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" id:\"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" pid:3359 exited_at:{seconds:1752279505 nanos:930642493}" Jul 12 00:18:25.937924 containerd[1552]: time="2025-07-12T00:18:25.937793733Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:18:25.938883 containerd[1552]: time="2025-07-12T00:18:25.938853058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" id:\"da678e33a6314179cec8a385ee65be40e852c9afdfb500afb8625e295cfbf226\" pid:4365 exited_at:{seconds:1752279505 nanos:938501890}" Jul 12 00:18:25.942697 containerd[1552]: time="2025-07-12T00:18:25.942651641Z" level=info 
msg="StopContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" with timeout 2 (s)" Jul 12 00:18:25.942943 containerd[1552]: time="2025-07-12T00:18:25.942916364Z" level=info msg="Stop container \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" with signal terminated" Jul 12 00:18:25.951587 systemd-networkd[1461]: lxc_health: Link DOWN Jul 12 00:18:25.952064 systemd-networkd[1461]: lxc_health: Lost carrier Jul 12 00:18:25.961270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477-rootfs.mount: Deactivated successfully. Jul 12 00:18:25.972781 systemd[1]: cri-containerd-2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c.scope: Deactivated successfully. Jul 12 00:18:25.973788 systemd[1]: cri-containerd-2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c.scope: Consumed 7.189s CPU time, 122M memory peak, 276K read from disk, 13.3M written to disk. 
Jul 12 00:18:25.975310 containerd[1552]: time="2025-07-12T00:18:25.975253840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" pid:3396 exited_at:{seconds:1752279505 nanos:973931035}" Jul 12 00:18:25.975651 containerd[1552]: time="2025-07-12T00:18:25.975389788Z" level=info msg="received exit event container_id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" id:\"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" pid:3396 exited_at:{seconds:1752279505 nanos:973931035}" Jul 12 00:18:25.980518 containerd[1552]: time="2025-07-12T00:18:25.980434441Z" level=info msg="StopContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" returns successfully" Jul 12 00:18:25.981529 containerd[1552]: time="2025-07-12T00:18:25.981468178Z" level=info msg="StopPodSandbox for \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\"" Jul 12 00:18:25.981754 containerd[1552]: time="2025-07-12T00:18:25.981729204Z" level=info msg="Container to stop \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:25.991022 systemd[1]: cri-containerd-53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a.scope: Deactivated successfully. 
Jul 12 00:18:25.992680 containerd[1552]: time="2025-07-12T00:18:25.992637809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" id:\"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" pid:2959 exit_status:137 exited_at:{seconds:1752279505 nanos:991251943}" Jul 12 00:18:26.006328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c-rootfs.mount: Deactivated successfully. Jul 12 00:18:26.017687 containerd[1552]: time="2025-07-12T00:18:26.017643001Z" level=info msg="StopContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" returns successfully" Jul 12 00:18:26.021556 containerd[1552]: time="2025-07-12T00:18:26.021512127Z" level=info msg="StopPodSandbox for \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\"" Jul 12 00:18:26.021699 containerd[1552]: time="2025-07-12T00:18:26.021607378Z" level=info msg="Container to stop \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:26.021699 containerd[1552]: time="2025-07-12T00:18:26.021621825Z" level=info msg="Container to stop \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:26.021699 containerd[1552]: time="2025-07-12T00:18:26.021632265Z" level=info msg="Container to stop \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:26.021699 containerd[1552]: time="2025-07-12T00:18:26.021645150Z" level=info msg="Container to stop \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:26.021699 containerd[1552]: 
time="2025-07-12T00:18:26.021656952Z" level=info msg="Container to stop \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:18:26.027482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a-rootfs.mount: Deactivated successfully. Jul 12 00:18:26.033243 containerd[1552]: time="2025-07-12T00:18:26.033120235Z" level=info msg="shim disconnected" id=53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a namespace=k8s.io Jul 12 00:18:26.034065 containerd[1552]: time="2025-07-12T00:18:26.033947868Z" level=warning msg="cleaning up after shim disconnected" id=53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a namespace=k8s.io Jul 12 00:18:26.034526 systemd[1]: cri-containerd-1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119.scope: Deactivated successfully. Jul 12 00:18:26.060238 containerd[1552]: time="2025-07-12T00:18:26.033983606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:18:26.061437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119-rootfs.mount: Deactivated successfully. 
Jul 12 00:18:26.067452 containerd[1552]: time="2025-07-12T00:18:26.067404332Z" level=info msg="shim disconnected" id=1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119 namespace=k8s.io Jul 12 00:18:26.067452 containerd[1552]: time="2025-07-12T00:18:26.067447283Z" level=warning msg="cleaning up after shim disconnected" id=1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119 namespace=k8s.io Jul 12 00:18:26.067684 containerd[1552]: time="2025-07-12T00:18:26.067455169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:18:26.090126 containerd[1552]: time="2025-07-12T00:18:26.089765613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" id:\"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" pid:2880 exit_status:137 exited_at:{seconds:1752279506 nanos:37595743}" Jul 12 00:18:26.092338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a-shm.mount: Deactivated successfully. Jul 12 00:18:26.093366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119-shm.mount: Deactivated successfully. 
Jul 12 00:18:26.095166 containerd[1552]: time="2025-07-12T00:18:26.095069346Z" level=info msg="received exit event sandbox_id:\"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" exit_status:137 exited_at:{seconds:1752279506 nanos:37595743}" Jul 12 00:18:26.095904 containerd[1552]: time="2025-07-12T00:18:26.095322858Z" level=info msg="received exit event sandbox_id:\"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" exit_status:137 exited_at:{seconds:1752279505 nanos:991251943}" Jul 12 00:18:26.105075 containerd[1552]: time="2025-07-12T00:18:26.104984386Z" level=info msg="TearDown network for sandbox \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" successfully" Jul 12 00:18:26.105075 containerd[1552]: time="2025-07-12T00:18:26.105049780Z" level=info msg="StopPodSandbox for \"53c552f5fcc2d66727567b5ef57c55281e4c615c086be9c54b847590eafd029a\" returns successfully" Jul 12 00:18:26.108020 containerd[1552]: time="2025-07-12T00:18:26.107968539Z" level=info msg="TearDown network for sandbox \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" successfully" Jul 12 00:18:26.108020 containerd[1552]: time="2025-07-12T00:18:26.108009507Z" level=info msg="StopPodSandbox for \"1bbec4ba92a8263f16f27176e3afd508696bc8a82652b233c2c6fc2dbe74a119\" returns successfully" Jul 12 00:18:26.135361 kubelet[2713]: I0712 00:18:26.135299 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmvr4\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-kube-api-access-fmvr4\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.135361 kubelet[2713]: I0712 00:18:26.135342 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-bpf-maps\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" 
(UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.135361 kubelet[2713]: I0712 00:18:26.135359 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-run\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.135361 kubelet[2713]: I0712 00:18:26.135373 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cni-path\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.135886 kubelet[2713]: I0712 00:18:26.135399 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e7da73-e41a-481e-b3a4-36563e26e585-clustermesh-secrets\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.135886 kubelet[2713]: I0712 00:18:26.135457 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cni-path" (OuterVolumeSpecName: "cni-path") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.135886 kubelet[2713]: I0712 00:18:26.135496 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.135886 kubelet[2713]: I0712 00:18:26.135482 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.135886 kubelet[2713]: I0712 00:18:26.135520 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-lib-modules\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136039 kubelet[2713]: I0712 00:18:26.135538 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-cilium-config-path\") pod \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\" (UID: \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\") " Jul 12 00:18:26.136039 kubelet[2713]: I0712 00:18:26.135552 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-cgroup\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136039 kubelet[2713]: I0712 00:18:26.135581 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.136039 kubelet[2713]: I0712 00:18:26.135597 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.136289 kubelet[2713]: I0712 00:18:26.136264 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-net\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136346 kubelet[2713]: I0712 00:18:26.136300 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-kernel\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136346 kubelet[2713]: I0712 00:18:26.136318 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-hostproc\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136346 kubelet[2713]: I0712 00:18:26.136339 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw89m\" (UniqueName: \"kubernetes.io/projected/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-kube-api-access-qw89m\") pod \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\" (UID: \"9e5407a7-cb53-431b-9ea9-7d0c1c718a48\") " Jul 12 00:18:26.136420 kubelet[2713]: I0712 00:18:26.136355 2713 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-hubble-tls\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136420 kubelet[2713]: I0712 00:18:26.136371 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-config-path\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136420 kubelet[2713]: I0712 00:18:26.136386 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-xtables-lock\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136420 kubelet[2713]: I0712 00:18:26.136402 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-etc-cni-netd\") pod \"42e7da73-e41a-481e-b3a4-36563e26e585\" (UID: \"42e7da73-e41a-481e-b3a4-36563e26e585\") " Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136435 2713 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136445 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136454 2713 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136461 2713 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136468 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136492 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.136513 kubelet[2713]: I0712 00:18:26.136514 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.136663 kubelet[2713]: I0712 00:18:26.136540 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.136663 kubelet[2713]: I0712 00:18:26.136552 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-hostproc" (OuterVolumeSpecName: "hostproc") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.138705 kubelet[2713]: I0712 00:18:26.138660 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e5407a7-cb53-431b-9ea9-7d0c1c718a48" (UID: "9e5407a7-cb53-431b-9ea9-7d0c1c718a48"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:18:26.141852 kubelet[2713]: I0712 00:18:26.141803 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:18:26.151477 kubelet[2713]: I0712 00:18:26.151398 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:18:26.153876 kubelet[2713]: I0712 00:18:26.153800 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-kube-api-access-fmvr4" (OuterVolumeSpecName: "kube-api-access-fmvr4") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "kube-api-access-fmvr4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:26.153876 kubelet[2713]: I0712 00:18:26.153831 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e7da73-e41a-481e-b3a4-36563e26e585-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:18:26.154270 kubelet[2713]: I0712 00:18:26.154246 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42e7da73-e41a-481e-b3a4-36563e26e585" (UID: "42e7da73-e41a-481e-b3a4-36563e26e585"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:26.154968 kubelet[2713]: I0712 00:18:26.154930 2713 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-kube-api-access-qw89m" (OuterVolumeSpecName: "kube-api-access-qw89m") pod "9e5407a7-cb53-431b-9ea9-7d0c1c718a48" (UID: "9e5407a7-cb53-431b-9ea9-7d0c1c718a48"). InnerVolumeSpecName "kube-api-access-qw89m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237333 2713 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237373 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmvr4\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-kube-api-access-fmvr4\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237384 2713 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42e7da73-e41a-481e-b3a4-36563e26e585-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237392 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237400 2713 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237408 2713 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237475 kubelet[2713]: I0712 00:18:26.237417 2713 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 
00:18:26.237475 kubelet[2713]: I0712 00:18:26.237424 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qw89m\" (UniqueName: \"kubernetes.io/projected/9e5407a7-cb53-431b-9ea9-7d0c1c718a48-kube-api-access-qw89m\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237776 kubelet[2713]: I0712 00:18:26.237432 2713 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42e7da73-e41a-481e-b3a4-36563e26e585-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237776 kubelet[2713]: I0712 00:18:26.237470 2713 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42e7da73-e41a-481e-b3a4-36563e26e585-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.237776 kubelet[2713]: I0712 00:18:26.237478 2713 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42e7da73-e41a-481e-b3a4-36563e26e585-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:26.356079 kubelet[2713]: I0712 00:18:26.355974 2713 scope.go:117] "RemoveContainer" containerID="2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c" Jul 12 00:18:26.359021 containerd[1552]: time="2025-07-12T00:18:26.358870740Z" level=info msg="RemoveContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\"" Jul 12 00:18:26.363618 systemd[1]: Removed slice kubepods-burstable-pod42e7da73_e41a_481e_b3a4_36563e26e585.slice - libcontainer container kubepods-burstable-pod42e7da73_e41a_481e_b3a4_36563e26e585.slice. Jul 12 00:18:26.363725 systemd[1]: kubepods-burstable-pod42e7da73_e41a_481e_b3a4_36563e26e585.slice: Consumed 7.321s CPU time, 122.3M memory peak, 284K read from disk, 13.3M written to disk. 
Jul 12 00:18:26.370058 systemd[1]: Removed slice kubepods-besteffort-pod9e5407a7_cb53_431b_9ea9_7d0c1c718a48.slice - libcontainer container kubepods-besteffort-pod9e5407a7_cb53_431b_9ea9_7d0c1c718a48.slice. Jul 12 00:18:26.414614 containerd[1552]: time="2025-07-12T00:18:26.414566202Z" level=info msg="RemoveContainer for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" returns successfully" Jul 12 00:18:26.414969 kubelet[2713]: I0712 00:18:26.414909 2713 scope.go:117] "RemoveContainer" containerID="9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548" Jul 12 00:18:26.417240 containerd[1552]: time="2025-07-12T00:18:26.417022282Z" level=info msg="RemoveContainer for \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\"" Jul 12 00:18:26.425669 containerd[1552]: time="2025-07-12T00:18:26.425621911Z" level=info msg="RemoveContainer for \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" returns successfully" Jul 12 00:18:26.426624 kubelet[2713]: I0712 00:18:26.426312 2713 scope.go:117] "RemoveContainer" containerID="72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010" Jul 12 00:18:26.429276 containerd[1552]: time="2025-07-12T00:18:26.429191877Z" level=info msg="RemoveContainer for \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\"" Jul 12 00:18:26.436451 containerd[1552]: time="2025-07-12T00:18:26.436397336Z" level=info msg="RemoveContainer for \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" returns successfully" Jul 12 00:18:26.436715 kubelet[2713]: I0712 00:18:26.436679 2713 scope.go:117] "RemoveContainer" containerID="8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f" Jul 12 00:18:26.438331 containerd[1552]: time="2025-07-12T00:18:26.438278492Z" level=info msg="RemoveContainer for \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\"" Jul 12 00:18:26.442898 containerd[1552]: time="2025-07-12T00:18:26.442841286Z" level=info 
msg="RemoveContainer for \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" returns successfully" Jul 12 00:18:26.443081 kubelet[2713]: I0712 00:18:26.443052 2713 scope.go:117] "RemoveContainer" containerID="922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a" Jul 12 00:18:26.444381 containerd[1552]: time="2025-07-12T00:18:26.444324065Z" level=info msg="RemoveContainer for \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\"" Jul 12 00:18:26.448506 containerd[1552]: time="2025-07-12T00:18:26.448447564Z" level=info msg="RemoveContainer for \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" returns successfully" Jul 12 00:18:26.448645 kubelet[2713]: I0712 00:18:26.448607 2713 scope.go:117] "RemoveContainer" containerID="2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c" Jul 12 00:18:26.448821 containerd[1552]: time="2025-07-12T00:18:26.448779505Z" level=error msg="ContainerStatus for \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\": not found" Jul 12 00:18:26.452411 kubelet[2713]: E0712 00:18:26.452365 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\": not found" containerID="2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c" Jul 12 00:18:26.452502 kubelet[2713]: I0712 00:18:26.452400 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c"} err="failed to get container status \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"2fe9c773752a57f5942823dbd0bddcd0ae3d4f53a132373119f727e4695aff8c\": not found" Jul 12 00:18:26.452502 kubelet[2713]: I0712 00:18:26.452438 2713 scope.go:117] "RemoveContainer" containerID="9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548" Jul 12 00:18:26.452737 containerd[1552]: time="2025-07-12T00:18:26.452682575Z" level=error msg="ContainerStatus for \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\": not found" Jul 12 00:18:26.452990 kubelet[2713]: E0712 00:18:26.452928 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\": not found" containerID="9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548" Jul 12 00:18:26.453044 kubelet[2713]: I0712 00:18:26.452997 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548"} err="failed to get container status \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b7c1c3513ddf12471dab497eb85320142dd7b575a6e14f1a6dc4126af541548\": not found" Jul 12 00:18:26.453044 kubelet[2713]: I0712 00:18:26.453037 2713 scope.go:117] "RemoveContainer" containerID="72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010" Jul 12 00:18:26.453300 containerd[1552]: time="2025-07-12T00:18:26.453267307Z" level=error msg="ContainerStatus for \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\": not found" Jul 12 00:18:26.453430 kubelet[2713]: E0712 00:18:26.453385 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\": not found" containerID="72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010" Jul 12 00:18:26.453479 kubelet[2713]: I0712 00:18:26.453431 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010"} err="failed to get container status \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\": rpc error: code = NotFound desc = an error occurred when try to find container \"72be9769b9610e074f53bfeaaedb556d159b2d9643faeb6da7b12b4930ed7010\": not found" Jul 12 00:18:26.453479 kubelet[2713]: I0712 00:18:26.453452 2713 scope.go:117] "RemoveContainer" containerID="8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f" Jul 12 00:18:26.453621 containerd[1552]: time="2025-07-12T00:18:26.453591493Z" level=error msg="ContainerStatus for \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\": not found" Jul 12 00:18:26.453702 kubelet[2713]: E0712 00:18:26.453685 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\": not found" containerID="8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f" Jul 12 00:18:26.453744 kubelet[2713]: I0712 00:18:26.453704 2713 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f"} err="failed to get container status \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8032a84ea6528c90d738b87efaf96e35dce3b5e88717945fbb26e6dbbbc4153f\": not found" Jul 12 00:18:26.453744 kubelet[2713]: I0712 00:18:26.453720 2713 scope.go:117] "RemoveContainer" containerID="922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a" Jul 12 00:18:26.453879 containerd[1552]: time="2025-07-12T00:18:26.453850736Z" level=error msg="ContainerStatus for \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\": not found" Jul 12 00:18:26.454012 kubelet[2713]: E0712 00:18:26.453984 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\": not found" containerID="922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a" Jul 12 00:18:26.454049 kubelet[2713]: I0712 00:18:26.454010 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a"} err="failed to get container status \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\": rpc error: code = NotFound desc = an error occurred when try to find container \"922fb08d47ad4a35cb4b8a93b5a67b659462335a924e2c6339af76fd79d4396a\": not found" Jul 12 00:18:26.454049 kubelet[2713]: I0712 00:18:26.454028 2713 scope.go:117] "RemoveContainer" containerID="20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477" Jul 12 00:18:26.455454 containerd[1552]: 
time="2025-07-12T00:18:26.455411674Z" level=info msg="RemoveContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\"" Jul 12 00:18:26.461318 containerd[1552]: time="2025-07-12T00:18:26.461259631Z" level=info msg="RemoveContainer for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" returns successfully" Jul 12 00:18:26.461555 kubelet[2713]: I0712 00:18:26.461516 2713 scope.go:117] "RemoveContainer" containerID="20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477" Jul 12 00:18:26.461842 containerd[1552]: time="2025-07-12T00:18:26.461796502Z" level=error msg="ContainerStatus for \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\": not found" Jul 12 00:18:26.461982 kubelet[2713]: E0712 00:18:26.461944 2713 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\": not found" containerID="20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477" Jul 12 00:18:26.462031 kubelet[2713]: I0712 00:18:26.461980 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477"} err="failed to get container status \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\": rpc error: code = NotFound desc = an error occurred when try to find container \"20386a3d4e59683dec592df6192a3c49217d2a2901215baf94d3f4aa8e882477\": not found" Jul 12 00:18:26.900247 kubelet[2713]: I0712 00:18:26.900179 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e7da73-e41a-481e-b3a4-36563e26e585" path="/var/lib/kubelet/pods/42e7da73-e41a-481e-b3a4-36563e26e585/volumes" 
Jul 12 00:18:26.901167 kubelet[2713]: I0712 00:18:26.901129 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5407a7-cb53-431b-9ea9-7d0c1c718a48" path="/var/lib/kubelet/pods/9e5407a7-cb53-431b-9ea9-7d0c1c718a48/volumes" Jul 12 00:18:26.960529 systemd[1]: var-lib-kubelet-pods-9e5407a7\x2dcb53\x2d431b\x2d9ea9\x2d7d0c1c718a48-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqw89m.mount: Deactivated successfully. Jul 12 00:18:26.960681 systemd[1]: var-lib-kubelet-pods-42e7da73\x2de41a\x2d481e\x2db3a4\x2d36563e26e585-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmvr4.mount: Deactivated successfully. Jul 12 00:18:26.960799 systemd[1]: var-lib-kubelet-pods-42e7da73\x2de41a\x2d481e\x2db3a4\x2d36563e26e585-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:18:26.960887 systemd[1]: var-lib-kubelet-pods-42e7da73\x2de41a\x2d481e\x2db3a4\x2d36563e26e585-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:18:27.631079 sshd[4342]: Connection closed by 10.0.0.1 port 40336 Jul 12 00:18:27.631942 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:27.641613 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:40336.service: Deactivated successfully. Jul 12 00:18:27.644197 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:18:27.645578 systemd-logind[1538]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:18:27.648899 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:33146.service - OpenSSH per-connection server daemon (10.0.0.1:33146). Jul 12 00:18:27.649790 systemd-logind[1538]: Removed session 25. 
Jul 12 00:18:27.706342 sshd[4493]: Accepted publickey for core from 10.0.0.1 port 33146 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:27.708320 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:27.714286 systemd-logind[1538]: New session 26 of user core. Jul 12 00:18:27.724508 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:18:28.897782 kubelet[2713]: E0712 00:18:28.897737 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:29.457237 sshd[4495]: Connection closed by 10.0.0.1 port 33146 Jul 12 00:18:29.457265 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:29.473193 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:33146.service: Deactivated successfully. Jul 12 00:18:29.475777 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:18:29.477285 systemd-logind[1538]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:18:29.480658 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:33160.service - OpenSSH per-connection server daemon (10.0.0.1:33160). Jul 12 00:18:29.481875 systemd-logind[1538]: Removed session 26. Jul 12 00:18:29.531837 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 33160 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc Jul 12 00:18:29.533271 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:29.538462 systemd-logind[1538]: New session 27 of user core. Jul 12 00:18:29.548345 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 12 00:18:29.599748 sshd[4509]: Connection closed by 10.0.0.1 port 33160
Jul 12 00:18:29.600527 sshd-session[4507]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:29.610231 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:33160.service: Deactivated successfully.
Jul 12 00:18:29.612659 systemd[1]: session-27.scope: Deactivated successfully.
Jul 12 00:18:29.613678 systemd-logind[1538]: Session 27 logged out. Waiting for processes to exit.
Jul 12 00:18:29.618194 systemd[1]: Started sshd@27-10.0.0.95:22-10.0.0.1:33162.service - OpenSSH per-connection server daemon (10.0.0.1:33162).
Jul 12 00:18:29.619108 systemd-logind[1538]: Removed session 27.
Jul 12 00:18:29.643129 systemd[1]: Created slice kubepods-burstable-podd365215c_663a_4422_afd2_cfba442e4a2b.slice - libcontainer container kubepods-burstable-podd365215c_663a_4422_afd2_cfba442e4a2b.slice.
Jul 12 00:18:29.659982 kubelet[2713]: I0712 00:18:29.659931    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d365215c-663a-4422-afd2-cfba442e4a2b-clustermesh-secrets\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660187 kubelet[2713]: I0712 00:18:29.660151    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d365215c-663a-4422-afd2-cfba442e4a2b-cilium-config-path\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660187 kubelet[2713]: I0712 00:18:29.660180    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-host-proc-sys-kernel\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660187 kubelet[2713]: I0712 00:18:29.660198    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-cilium-cgroup\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660187 kubelet[2713]: I0712 00:18:29.660227    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-lib-modules\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660241    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d365215c-663a-4422-afd2-cfba442e4a2b-cilium-ipsec-secrets\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660254    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d365215c-663a-4422-afd2-cfba442e4a2b-hubble-tls\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660276    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-cilium-run\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660428    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-xtables-lock\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660464    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph8x5\" (UniqueName: \"kubernetes.io/projected/d365215c-663a-4422-afd2-cfba442e4a2b-kube-api-access-ph8x5\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660520 kubelet[2713]: I0712 00:18:29.660482    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-cni-path\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660705 kubelet[2713]: I0712 00:18:29.660498    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-host-proc-sys-net\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660705 kubelet[2713]: I0712 00:18:29.660521    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-bpf-maps\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660705 kubelet[2713]: I0712 00:18:29.660542    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-hostproc\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.660705 kubelet[2713]: I0712 00:18:29.660561    2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d365215c-663a-4422-afd2-cfba442e4a2b-etc-cni-netd\") pod \"cilium-7ghp2\" (UID: \"d365215c-663a-4422-afd2-cfba442e4a2b\") " pod="kube-system/cilium-7ghp2"
Jul 12 00:18:29.679591 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 33162 ssh2: RSA SHA256:vyezqvaDT/l+5kUccKQO1QIecRdQxwI+fSccHDXDwwc
Jul 12 00:18:29.681869 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:29.687289 systemd-logind[1538]: New session 28 of user core.
Jul 12 00:18:29.702565 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 12 00:18:29.946549 kubelet[2713]: E0712 00:18:29.946475    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:29.947286 containerd[1552]: time="2025-07-12T00:18:29.947144928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7ghp2,Uid:d365215c-663a-4422-afd2-cfba442e4a2b,Namespace:kube-system,Attempt:0,}"
Jul 12 00:18:29.971787 kubelet[2713]: E0712 00:18:29.971733    2713 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:18:30.039006 containerd[1552]: time="2025-07-12T00:18:30.038950842Z" level=info msg="connecting to shim 41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" namespace=k8s.io protocol=ttrpc version=3
Jul 12 00:18:30.065557 systemd[1]: Started cri-containerd-41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944.scope - libcontainer container 41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944.
Jul 12 00:18:30.094622 containerd[1552]: time="2025-07-12T00:18:30.094581082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7ghp2,Uid:d365215c-663a-4422-afd2-cfba442e4a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\""
Jul 12 00:18:30.095423 kubelet[2713]: E0712 00:18:30.095393    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:30.101810 containerd[1552]: time="2025-07-12T00:18:30.101766778Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:18:30.130738 containerd[1552]: time="2025-07-12T00:18:30.130682458Z" level=info msg="Container c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:18:30.137702 containerd[1552]: time="2025-07-12T00:18:30.137648747Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\""
Jul 12 00:18:30.138375 containerd[1552]: time="2025-07-12T00:18:30.138331753Z" level=info msg="StartContainer for \"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\""
Jul 12 00:18:30.139684 containerd[1552]: time="2025-07-12T00:18:30.139646751Z" level=info msg="connecting to shim c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" protocol=ttrpc version=3
Jul 12 00:18:30.169376 systemd[1]: Started cri-containerd-c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd.scope - libcontainer container c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd.
Jul 12 00:18:30.204851 containerd[1552]: time="2025-07-12T00:18:30.204727287Z" level=info msg="StartContainer for \"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\" returns successfully"
Jul 12 00:18:30.213580 systemd[1]: cri-containerd-c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd.scope: Deactivated successfully.
Jul 12 00:18:30.214865 containerd[1552]: time="2025-07-12T00:18:30.214823250Z" level=info msg="received exit event container_id:\"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\" id:\"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\" pid:4589 exited_at:{seconds:1752279510 nanos:214513842}"
Jul 12 00:18:30.215020 containerd[1552]: time="2025-07-12T00:18:30.214978033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\" id:\"c3119b510968f3b8cd0bb291ade23e6d8de60179f227e7a518fe63f09036b6fd\" pid:4589 exited_at:{seconds:1752279510 nanos:214513842}"
Jul 12 00:18:30.370426 kubelet[2713]: E0712 00:18:30.370358    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:30.377051 containerd[1552]: time="2025-07-12T00:18:30.376414367Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:18:30.385081 containerd[1552]: time="2025-07-12T00:18:30.384850256Z" level=info msg="Container 9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:18:30.393102 containerd[1552]: time="2025-07-12T00:18:30.393041361Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\""
Jul 12 00:18:30.393608 containerd[1552]: time="2025-07-12T00:18:30.393571518Z" level=info msg="StartContainer for \"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\""
Jul 12 00:18:30.394692 containerd[1552]: time="2025-07-12T00:18:30.394651849Z" level=info msg="connecting to shim 9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" protocol=ttrpc version=3
Jul 12 00:18:30.426370 systemd[1]: Started cri-containerd-9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d.scope - libcontainer container 9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d.
Jul 12 00:18:30.461897 containerd[1552]: time="2025-07-12T00:18:30.461749276Z" level=info msg="StartContainer for \"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\" returns successfully"
Jul 12 00:18:30.467680 systemd[1]: cri-containerd-9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d.scope: Deactivated successfully.
Jul 12 00:18:30.468900 containerd[1552]: time="2025-07-12T00:18:30.468273195Z" level=info msg="received exit event container_id:\"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\" id:\"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\" pid:4635 exited_at:{seconds:1752279510 nanos:467912860}"
Jul 12 00:18:30.469201 containerd[1552]: time="2025-07-12T00:18:30.469131374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\" id:\"9643f44d031bb1677580a1135e4b4d17054110657ad57f9c07a3e7564b04a00d\" pid:4635 exited_at:{seconds:1752279510 nanos:467912860}"
Jul 12 00:18:31.374303 kubelet[2713]: E0712 00:18:31.374269    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:31.575904 containerd[1552]: time="2025-07-12T00:18:31.575861276Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:18:31.645781 containerd[1552]: time="2025-07-12T00:18:31.645650810Z" level=info msg="Container 1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:18:31.675060 containerd[1552]: time="2025-07-12T00:18:31.674989189Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\""
Jul 12 00:18:31.675720 containerd[1552]: time="2025-07-12T00:18:31.675684729Z" level=info msg="StartContainer for \"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\""
Jul 12 00:18:31.677428 containerd[1552]: time="2025-07-12T00:18:31.677395668Z" level=info msg="connecting to shim 1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" protocol=ttrpc version=3
Jul 12 00:18:31.706502 systemd[1]: Started cri-containerd-1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab.scope - libcontainer container 1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab.
Jul 12 00:18:31.758384 containerd[1552]: time="2025-07-12T00:18:31.758323397Z" level=info msg="StartContainer for \"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\" returns successfully"
Jul 12 00:18:31.760637 containerd[1552]: time="2025-07-12T00:18:31.760561356Z" level=info msg="received exit event container_id:\"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\" id:\"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\" pid:4683 exited_at:{seconds:1752279511 nanos:760275614}"
Jul 12 00:18:31.760906 containerd[1552]: time="2025-07-12T00:18:31.760586234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\" id:\"1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab\" pid:4683 exited_at:{seconds:1752279511 nanos:760275614}"
Jul 12 00:18:31.761054 systemd[1]: cri-containerd-1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab.scope: Deactivated successfully.
Jul 12 00:18:31.785425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e42991c711986ac633ad059a45d8326fddda724e107e2df657fddc33032f7ab-rootfs.mount: Deactivated successfully.
Jul 12 00:18:32.380986 kubelet[2713]: E0712 00:18:32.380921    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:32.387433 containerd[1552]: time="2025-07-12T00:18:32.387357898Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:18:32.401202 containerd[1552]: time="2025-07-12T00:18:32.401075784Z" level=info msg="Container bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:18:32.403416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93286335.mount: Deactivated successfully.
Jul 12 00:18:32.410352 containerd[1552]: time="2025-07-12T00:18:32.410303118Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\""
Jul 12 00:18:32.410960 containerd[1552]: time="2025-07-12T00:18:32.410912765Z" level=info msg="StartContainer for \"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\""
Jul 12 00:18:32.411947 containerd[1552]: time="2025-07-12T00:18:32.411898335Z" level=info msg="connecting to shim bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" protocol=ttrpc version=3
Jul 12 00:18:32.440428 systemd[1]: Started cri-containerd-bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d.scope - libcontainer container bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d.
Jul 12 00:18:32.473726 systemd[1]: cri-containerd-bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d.scope: Deactivated successfully.
Jul 12 00:18:32.475122 containerd[1552]: time="2025-07-12T00:18:32.475077731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\" id:\"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\" pid:4721 exited_at:{seconds:1752279512 nanos:473957675}"
Jul 12 00:18:32.477874 containerd[1552]: time="2025-07-12T00:18:32.477844912Z" level=info msg="received exit event container_id:\"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\" id:\"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\" pid:4721 exited_at:{seconds:1752279512 nanos:473957675}"
Jul 12 00:18:32.489874 containerd[1552]: time="2025-07-12T00:18:32.489821775Z" level=info msg="StartContainer for \"bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d\" returns successfully"
Jul 12 00:18:32.504740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdd715f7f3d10af604c15df96b062a9e6d470a409fb6a093f8cbcdc638f7350d-rootfs.mount: Deactivated successfully.
Jul 12 00:18:33.386327 kubelet[2713]: E0712 00:18:33.386268    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:33.391972 containerd[1552]: time="2025-07-12T00:18:33.391894388Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:18:33.405482 containerd[1552]: time="2025-07-12T00:18:33.405422468Z" level=info msg="Container 12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484: CDI devices from CRI Config.CDIDevices: []"
Jul 12 00:18:33.414367 containerd[1552]: time="2025-07-12T00:18:33.414320653Z" level=info msg="CreateContainer within sandbox \"41113a03afed95cdf571edd0e92183c0e80e051e8132990677faf7cecdf13944\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\""
Jul 12 00:18:33.414873 containerd[1552]: time="2025-07-12T00:18:33.414838075Z" level=info msg="StartContainer for \"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\""
Jul 12 00:18:33.417231 containerd[1552]: time="2025-07-12T00:18:33.416429414Z" level=info msg="connecting to shim 12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484" address="unix:///run/containerd/s/f8cd7281c9832910b57e154254a0c7f5665550b419d87f6f295cbbfa5fe23aac" protocol=ttrpc version=3
Jul 12 00:18:33.441376 systemd[1]: Started cri-containerd-12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484.scope - libcontainer container 12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484.
Jul 12 00:18:33.485077 containerd[1552]: time="2025-07-12T00:18:33.485003237Z" level=info msg="StartContainer for \"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" returns successfully"
Jul 12 00:18:33.599913 containerd[1552]: time="2025-07-12T00:18:33.599852379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"81ead9352fd8482f42e83cba3716daa234031874d9e5220bbe1c5b5e19ad97e3\" pid:4792 exited_at:{seconds:1752279513 nanos:598774203}"
Jul 12 00:18:33.981274 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 12 00:18:34.391865 kubelet[2713]: E0712 00:18:34.391833    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:35.947743 kubelet[2713]: E0712 00:18:35.947696    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:36.143873 containerd[1552]: time="2025-07-12T00:18:36.143820056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"cc3fc05500c5a7dfa5ea7fbffb1c3a01c7a3b6e2063b19b1dcfc4e278af3ceef\" pid:4999 exit_status:1 exited_at:{seconds:1752279516 nanos:143466115}"
Jul 12 00:18:37.286044 systemd-networkd[1461]: lxc_health: Link UP
Jul 12 00:18:37.289814 systemd-networkd[1461]: lxc_health: Gained carrier
Jul 12 00:18:37.948673 kubelet[2713]: E0712 00:18:37.948633    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:37.969173 kubelet[2713]: I0712 00:18:37.969087    2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7ghp2" podStartSLOduration=8.969066844 podStartE2EDuration="8.969066844s" podCreationTimestamp="2025-07-12 00:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:34.823239831 +0000 UTC m=+100.037695779" watchObservedRunningTime="2025-07-12 00:18:37.969066844 +0000 UTC m=+103.183522793"
Jul 12 00:18:38.260281 containerd[1552]: time="2025-07-12T00:18:38.259894507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"729d0a1476e31e5bdcdf2f5e99df4bfa3105e1a0b17f5fcd084749e024a7eed1\" pid:5316 exited_at:{seconds:1752279518 nanos:259298658}"
Jul 12 00:18:38.399684 kubelet[2713]: E0712 00:18:38.399644    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:39.216629 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Jul 12 00:18:39.402142 kubelet[2713]: E0712 00:18:39.402086    2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:40.386269 containerd[1552]: time="2025-07-12T00:18:40.386095309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"699a3c66561442993f55aa98f91067808a39b70249434e618c1dc61cc1630d82\" pid:5353 exited_at:{seconds:1752279520 nanos:385717393}"
Jul 12 00:18:42.554972 containerd[1552]: time="2025-07-12T00:18:42.554914649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"3450db9eac94f6bb2f9b407069e0e6ef1f48d6809a5cf9bdd7899cada8e89c17\" pid:5385 exited_at:{seconds:1752279522 nanos:554497057}"
Jul 12 00:18:44.642058 containerd[1552]: time="2025-07-12T00:18:44.641996802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12712d3ae54f2d4142aec7b6c60a23bbfaa9ae2e676bddfc9b0f0432a2a32484\" id:\"e7cc1a6abe69b26b845e1a00b42c5544b8a7b5974de622b685c3b14588d6be2c\" pid:5410 exited_at:{seconds:1752279524 nanos:641617935}"
Jul 12 00:18:44.679835 sshd[4518]: Connection closed by 10.0.0.1 port 33162
Jul 12 00:18:44.681352 sshd-session[4516]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:44.686808 systemd[1]: sshd@27-10.0.0.95:22-10.0.0.1:33162.service: Deactivated successfully.
Jul 12 00:18:44.689034 systemd[1]: session-28.scope: Deactivated successfully.
Jul 12 00:18:44.690186 systemd-logind[1538]: Session 28 logged out. Waiting for processes to exit.
Jul 12 00:18:44.691921 systemd-logind[1538]: Removed session 28.