Jul 12 10:22:50.814010 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Jul 12 08:25:04 -00 2025
Jul 12 10:22:50.814039 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4
Jul 12 10:22:50.814051 kernel: BIOS-provided physical RAM map:
Jul 12 10:22:50.814058 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 12 10:22:50.814064 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 12 10:22:50.814071 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 12 10:22:50.814078 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 12 10:22:50.814085 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 12 10:22:50.814094 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 12 10:22:50.814100 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 12 10:22:50.814107 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 12 10:22:50.814114 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 12 10:22:50.814120 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 12 10:22:50.814127 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 12 10:22:50.814137 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 12 10:22:50.814144 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 12 10:22:50.814152 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 12 10:22:50.814159 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 12 10:22:50.814166 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 12 10:22:50.814173 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 12 10:22:50.814180 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 12 10:22:50.814187 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 12 10:22:50.814194 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 12 10:22:50.814201 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 10:22:50.814208 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 12 10:22:50.814217 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 10:22:50.814224 kernel: NX (Execute Disable) protection: active
Jul 12 10:22:50.814231 kernel: APIC: Static calls initialized
Jul 12 10:22:50.814239 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 12 10:22:50.814246 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 12 10:22:50.814253 kernel: extended physical RAM map:
Jul 12 10:22:50.814260 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 12 10:22:50.814267 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 12 10:22:50.814275 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 12 10:22:50.814282 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 12 10:22:50.814289 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 12 10:22:50.814298 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 12 10:22:50.814306 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 12 10:22:50.814313 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 12 10:22:50.814320 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 12 10:22:50.814330 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 12 10:22:50.814337 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 12 10:22:50.814347 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 12 10:22:50.814354 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 12 10:22:50.814362 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 12 10:22:50.814369 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 12 10:22:50.814377 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 12 10:22:50.814384 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 12 10:22:50.814392 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 12 10:22:50.814399 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 12 10:22:50.814407 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 12 10:22:50.814414 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 12 10:22:50.814424 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 12 10:22:50.814431 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 12 10:22:50.814438 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 12 10:22:50.814446 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 12 10:22:50.814453 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 12 10:22:50.814460 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 12 10:22:50.814468 kernel: efi: EFI v2.7 by EDK II
Jul 12 10:22:50.814475 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 12 10:22:50.814483 kernel: random: crng init done
Jul 12 10:22:50.814490 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 12 10:22:50.814498 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 12 10:22:50.814507 kernel: secureboot: Secure boot disabled
Jul 12 10:22:50.814514 kernel: SMBIOS 2.8 present.
Jul 12 10:22:50.814522 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 12 10:22:50.814529 kernel: DMI: Memory slots populated: 1/1
Jul 12 10:22:50.814537 kernel: Hypervisor detected: KVM
Jul 12 10:22:50.814544 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 12 10:22:50.814551 kernel: kvm-clock: using sched offset of 4864631222 cycles
Jul 12 10:22:50.814560 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 12 10:22:50.814575 kernel: tsc: Detected 2794.746 MHz processor
Jul 12 10:22:50.814583 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 12 10:22:50.814593 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 12 10:22:50.814600 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 12 10:22:50.814608 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 12 10:22:50.814615 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 12 10:22:50.814623 kernel: Using GB pages for direct mapping
Jul 12 10:22:50.814630 kernel: ACPI: Early table checksum verification disabled
Jul 12 10:22:50.814638 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 12 10:22:50.814645 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 10:22:50.814653 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814663 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814670 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 12 10:22:50.814690 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814698 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814706 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814713 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 10:22:50.814721 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 12 10:22:50.814728 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 12 10:22:50.814736 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 12 10:22:50.814746 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 12 10:22:50.814754 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 12 10:22:50.814772 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 12 10:22:50.814780 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 12 10:22:50.814788 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 12 10:22:50.814795 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 12 10:22:50.814803 kernel: No NUMA configuration found
Jul 12 10:22:50.814811 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 12 10:22:50.814818 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 12 10:22:50.814828 kernel: Zone ranges:
Jul 12 10:22:50.814836 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 12 10:22:50.814843 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 12 10:22:50.814851 kernel: Normal empty
Jul 12 10:22:50.814858 kernel: Device empty
Jul 12 10:22:50.814866 kernel: Movable zone start for each node
Jul 12 10:22:50.814873 kernel: Early memory node ranges
Jul 12 10:22:50.814880 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 12 10:22:50.814888 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 12 10:22:50.814895 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 12 10:22:50.814905 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 12 10:22:50.814912 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 12 10:22:50.814932 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 12 10:22:50.814940 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 12 10:22:50.814947 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 12 10:22:50.814955 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 12 10:22:50.814972 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 10:22:50.814989 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 12 10:22:50.815023 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 12 10:22:50.815031 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 12 10:22:50.815039 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 12 10:22:50.815047 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 12 10:22:50.815057 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 12 10:22:50.815074 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 12 10:22:50.815099 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 12 10:22:50.815108 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 12 10:22:50.815116 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 12 10:22:50.815127 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 12 10:22:50.815134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 12 10:22:50.815143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 12 10:22:50.815150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 12 10:22:50.815158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 12 10:22:50.815166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 12 10:22:50.815174 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 12 10:22:50.815182 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 12 10:22:50.815189 kernel: TSC deadline timer available
Jul 12 10:22:50.815199 kernel: CPU topo: Max. logical packages: 1
Jul 12 10:22:50.815207 kernel: CPU topo: Max. logical dies: 1
Jul 12 10:22:50.815215 kernel: CPU topo: Max. dies per package: 1
Jul 12 10:22:50.815222 kernel: CPU topo: Max. threads per core: 1
Jul 12 10:22:50.815230 kernel: CPU topo: Num. cores per package: 4
Jul 12 10:22:50.815238 kernel: CPU topo: Num. threads per package: 4
Jul 12 10:22:50.815245 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 12 10:22:50.815253 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 12 10:22:50.815261 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 12 10:22:50.815270 kernel: kvm-guest: setup PV sched yield
Jul 12 10:22:50.815278 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 12 10:22:50.815286 kernel: Booting paravirtualized kernel on KVM
Jul 12 10:22:50.815294 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 12 10:22:50.815302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 12 10:22:50.815310 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 12 10:22:50.815318 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 12 10:22:50.815325 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 12 10:22:50.815333 kernel: kvm-guest: PV spinlocks enabled
Jul 12 10:22:50.815343 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 12 10:22:50.815352 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4
Jul 12 10:22:50.815360 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 10:22:50.815368 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 10:22:50.815376 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 10:22:50.815383 kernel: Fallback order for Node 0: 0
Jul 12 10:22:50.815391 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 12 10:22:50.815399 kernel: Policy zone: DMA32
Jul 12 10:22:50.815409 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 10:22:50.815417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 10:22:50.815424 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 12 10:22:50.815432 kernel: ftrace: allocated 157 pages with 5 groups
Jul 12 10:22:50.815440 kernel: Dynamic Preempt: voluntary
Jul 12 10:22:50.815448 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 10:22:50.815456 kernel: rcu: RCU event tracing is enabled.
Jul 12 10:22:50.815464 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 10:22:50.815472 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 10:22:50.815482 kernel: Rude variant of Tasks RCU enabled.
Jul 12 10:22:50.815490 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 10:22:50.815498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 10:22:50.815506 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 10:22:50.815514 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 10:22:50.815522 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 10:22:50.815529 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 10:22:50.815537 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 12 10:22:50.815545 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 10:22:50.815555 kernel: Console: colour dummy device 80x25
Jul 12 10:22:50.815570 kernel: printk: legacy console [ttyS0] enabled
Jul 12 10:22:50.815578 kernel: ACPI: Core revision 20240827
Jul 12 10:22:50.815586 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 12 10:22:50.815594 kernel: APIC: Switch to symmetric I/O mode setup
Jul 12 10:22:50.815602 kernel: x2apic enabled
Jul 12 10:22:50.815610 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 12 10:22:50.815618 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 12 10:22:50.815626 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 12 10:22:50.815634 kernel: kvm-guest: setup PV IPIs
Jul 12 10:22:50.815644 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 12 10:22:50.815653 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 10:22:50.815661 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 12 10:22:50.815669 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 12 10:22:50.815692 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 12 10:22:50.815701 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 12 10:22:50.815709 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 12 10:22:50.815717 kernel: Spectre V2 : Mitigation: Retpolines
Jul 12 10:22:50.815728 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 12 10:22:50.815736 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 12 10:22:50.815744 kernel: RETBleed: Mitigation: untrained return thunk
Jul 12 10:22:50.815752 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 12 10:22:50.815760 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 12 10:22:50.815768 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 12 10:22:50.815777 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 12 10:22:50.815785 kernel: x86/bugs: return thunk changed
Jul 12 10:22:50.815792 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 12 10:22:50.815802 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 12 10:22:50.815810 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 12 10:22:50.815818 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 12 10:22:50.815826 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 12 10:22:50.815834 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 12 10:22:50.815842 kernel: Freeing SMP alternatives memory: 32K
Jul 12 10:22:50.815850 kernel: pid_max: default: 32768 minimum: 301
Jul 12 10:22:50.815858 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 12 10:22:50.815866 kernel: landlock: Up and running.
Jul 12 10:22:50.815876 kernel: SELinux: Initializing.
Jul 12 10:22:50.815884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 10:22:50.815892 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 10:22:50.815900 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 12 10:22:50.815908 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 12 10:22:50.815916 kernel: ... version:                0
Jul 12 10:22:50.815924 kernel: ... bit width:              48
Jul 12 10:22:50.815932 kernel: ... generic registers:      6
Jul 12 10:22:50.815939 kernel: ... value mask:             0000ffffffffffff
Jul 12 10:22:50.815949 kernel: ... max period:             00007fffffffffff
Jul 12 10:22:50.815957 kernel: ... fixed-purpose events:   0
Jul 12 10:22:50.815965 kernel: ... event mask:             000000000000003f
Jul 12 10:22:50.815973 kernel: signal: max sigframe size: 1776
Jul 12 10:22:50.815981 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 10:22:50.815989 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 10:22:50.815997 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 12 10:22:50.816005 kernel: smp: Bringing up secondary CPUs ...
Jul 12 10:22:50.816013 kernel: smpboot: x86: Booting SMP configuration:
Jul 12 10:22:50.816023 kernel: .... node #0, CPUs: #1 #2 #3
Jul 12 10:22:50.816030 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 10:22:50.816038 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 12 10:22:50.816047 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 137196K reserved, 0K cma-reserved)
Jul 12 10:22:50.816055 kernel: devtmpfs: initialized
Jul 12 10:22:50.816062 kernel: x86/mm: Memory block size: 128MB
Jul 12 10:22:50.816070 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 12 10:22:50.816078 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 12 10:22:50.816090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 12 10:22:50.816112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 12 10:22:50.816128 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 12 10:22:50.816147 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 12 10:22:50.816155 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 10:22:50.816163 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 10:22:50.816171 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 10:22:50.816179 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 10:22:50.816187 kernel: audit: initializing netlink subsys (disabled)
Jul 12 10:22:50.816197 kernel: audit: type=2000 audit(1752315767.456:1): state=initialized audit_enabled=0 res=1
Jul 12 10:22:50.816205 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 10:22:50.816213 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 12 10:22:50.816221 kernel: cpuidle: using governor menu
Jul 12 10:22:50.816229 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 10:22:50.816237 kernel: dca service started, version 1.12.1
Jul 12 10:22:50.816245 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 12 10:22:50.816252 kernel: PCI: Using configuration type 1 for base access
Jul 12 10:22:50.816260 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 12 10:22:50.816270 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 10:22:50.816278 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 10:22:50.816286 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 10:22:50.816294 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 10:22:50.816302 kernel: ACPI: Added _OSI(Module Device)
Jul 12 10:22:50.816310 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 10:22:50.816317 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 10:22:50.816325 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 10:22:50.816333 kernel: ACPI: Interpreter enabled
Jul 12 10:22:50.816343 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 12 10:22:50.816350 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 12 10:22:50.816358 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 12 10:22:50.816366 kernel: PCI: Using E820 reservations for host bridge windows
Jul 12 10:22:50.816384 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 12 10:22:50.816401 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 10:22:50.816635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 10:22:50.816781 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 12 10:22:50.816908 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 12 10:22:50.816918 kernel: PCI host bridge to bus 0000:00
Jul 12 10:22:50.817044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 12 10:22:50.817155 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 12 10:22:50.817267 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 12 10:22:50.817377 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 12 10:22:50.817485 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 12 10:22:50.817647 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 12 10:22:50.817787 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 10:22:50.817962 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 12 10:22:50.818093 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 12 10:22:50.818214 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 12 10:22:50.818343 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 12 10:22:50.818493 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 12 10:22:50.818638 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 12 10:22:50.818789 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 12 10:22:50.818912 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 12 10:22:50.819032 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 12 10:22:50.819150 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 12 10:22:50.819279 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 12 10:22:50.819405 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 12 10:22:50.819524 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 12 10:22:50.819652 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 12 10:22:50.819826 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 12 10:22:50.819960 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 12 10:22:50.820084 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 12 10:22:50.820203 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 12 10:22:50.820338 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 12 10:22:50.820469 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 12 10:22:50.820602 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 12 10:22:50.820778 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 12 10:22:50.820903 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 12 10:22:50.821022 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 12 10:22:50.821150 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 12 10:22:50.821276 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 12 10:22:50.821287 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 12 10:22:50.821295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 12 10:22:50.821303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 12 10:22:50.821311 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 12 10:22:50.821319 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 12 10:22:50.821327 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 12 10:22:50.821336 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 12 10:22:50.821346 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 12 10:22:50.821354 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 12 10:22:50.821362 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 12 10:22:50.821488 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 12 10:22:50.821496 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 12 10:22:50.821504 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 12 10:22:50.821512 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 12 10:22:50.821520 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 12 10:22:50.821528 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 12 10:22:50.821538 kernel: iommu: Default domain type: Translated
Jul 12 10:22:50.821546 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 12 10:22:50.821554 kernel: efivars: Registered efivars operations
Jul 12 10:22:50.821571 kernel: PCI: Using ACPI for IRQ routing
Jul 12 10:22:50.821580 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 12 10:22:50.821588 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 12 10:22:50.821596 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 12 10:22:50.821604 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 12 10:22:50.821612 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 12 10:22:50.821622 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 12 10:22:50.821629 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 12 10:22:50.821637 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 12 10:22:50.821645 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 12 10:22:50.821784 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 12 10:22:50.821904 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 12 10:22:50.822026 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 12 10:22:50.822037 kernel: vgaarb: loaded
Jul 12 10:22:50.822048 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 12 10:22:50.822056 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 12 10:22:50.822064 kernel: clocksource: Switched to clocksource kvm-clock
Jul 12 10:22:50.822072 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 10:22:50.822081 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 10:22:50.822089 kernel: pnp: PnP ACPI init
Jul 12 10:22:50.822245 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 12 10:22:50.822273 kernel: pnp: PnP ACPI: found 6 devices
Jul 12 10:22:50.822285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 12 10:22:50.822293 kernel: NET: Registered PF_INET protocol family
Jul 12 10:22:50.822302 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 10:22:50.822310 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 10:22:50.822318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 10:22:50.822327 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 10:22:50.822335 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 10:22:50.822343 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 10:22:50.822351 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 10:22:50.822362 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 10:22:50.822370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 10:22:50.822378 kernel: NET: Registered PF_XDP protocol family
Jul 12 10:22:50.822501 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 12 10:22:50.822634 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 12 10:22:50.822778 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 12 10:22:50.822889 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 12 10:22:50.822998 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 12 10:22:50.823111 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 12 10:22:50.823235 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 12 10:22:50.823348 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 12 10:22:50.823359 kernel: PCI: CLS 0 bytes, default 64
Jul 12 10:22:50.823368 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 10:22:50.823376 kernel: Initialise system trusted keyrings
Jul 12 10:22:50.823384 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 10:22:50.823392 kernel: Key type asymmetric registered
Jul 12 10:22:50.823403 kernel: Asymmetric key parser 'x509' registered
Jul 12 10:22:50.823411 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 10:22:50.823420 kernel: io scheduler mq-deadline registered
Jul 12 10:22:50.823430 kernel: io scheduler kyber registered
Jul 12 10:22:50.823438 kernel: io scheduler bfq registered
Jul 12 10:22:50.823446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 12 10:22:50.823457 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 12 10:22:50.823465 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 12 10:22:50.823473 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 12 10:22:50.823481 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 10:22:50.823490 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 12 10:22:50.823498 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 12 10:22:50.823506 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 12 10:22:50.823515 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 12 10:22:50.823648 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 12 10:22:50.823820 kernel: rtc_cmos 00:04: registered as rtc0
Jul 12 10:22:50.823833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 12 10:22:50.823945 kernel: rtc_cmos 00:04: setting system clock to 2025-07-12T10:22:50 UTC (1752315770)
Jul 12 10:22:50.824057 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 12 10:22:50.824067 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 12 10:22:50.824076 kernel: efifb: probing for efifb
Jul 12 10:22:50.824084 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 12 10:22:50.824092 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 12 10:22:50.824105 kernel: efifb: scrolling: redraw
Jul 12 10:22:50.824113 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 10:22:50.824121 kernel: Console: switching to colour frame buffer device 160x50
Jul 12 10:22:50.824130 kernel: fb0: EFI VGA frame buffer device
Jul 12 10:22:50.824138 kernel: pstore: Using crash dump compression: deflate
Jul 12 10:22:50.824146 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 12 10:22:50.824154 kernel: NET: Registered PF_INET6 protocol family
Jul 12 10:22:50.824162 kernel: Segment Routing with IPv6
Jul 12 10:22:50.824170 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 10:22:50.824181 kernel: NET: Registered PF_PACKET protocol family
Jul 12 10:22:50.824189 kernel: Key type dns_resolver registered
Jul 12 10:22:50.824197 kernel: IPI shorthand broadcast: enabled
Jul 12 10:22:50.824206 kernel: sched_clock: Marking stable (4565002792, 160296280)->(4758280213, -32981141)
Jul 12 10:22:50.824214 kernel: registered taskstats version 1
Jul 12 10:22:50.824222 kernel: Loading compiled-in X.509 certificates
Jul 12 10:22:50.824230 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b66546913a05d1e6699856b7b667f16de808d3b'
Jul 12 10:22:50.824239 kernel: Demotion targets for Node 0: null
Jul 12 10:22:50.824247 kernel: Key type .fscrypt registered
Jul 12 10:22:50.824257 kernel: Key
type fscrypt-provisioning registered Jul 12 10:22:50.824265 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 10:22:50.824273 kernel: ima: Allocated hash algorithm: sha1 Jul 12 10:22:50.824281 kernel: ima: No architecture policies found Jul 12 10:22:50.824289 kernel: clk: Disabling unused clocks Jul 12 10:22:50.824297 kernel: Warning: unable to open an initial console. Jul 12 10:22:50.824306 kernel: Freeing unused kernel image (initmem) memory: 54608K Jul 12 10:22:50.824314 kernel: Write protecting the kernel read-only data: 24576k Jul 12 10:22:50.824322 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 12 10:22:50.824332 kernel: Run /init as init process Jul 12 10:22:50.824340 kernel: with arguments: Jul 12 10:22:50.824348 kernel: /init Jul 12 10:22:50.824356 kernel: with environment: Jul 12 10:22:50.824364 kernel: HOME=/ Jul 12 10:22:50.824372 kernel: TERM=linux Jul 12 10:22:50.824380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 10:22:50.824393 systemd[1]: Successfully made /usr/ read-only. Jul 12 10:22:50.824406 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 10:22:50.824416 systemd[1]: Detected virtualization kvm. Jul 12 10:22:50.824426 systemd[1]: Detected architecture x86-64. Jul 12 10:22:50.824435 systemd[1]: Running in initrd. Jul 12 10:22:50.824443 systemd[1]: No hostname configured, using default hostname. Jul 12 10:22:50.824452 systemd[1]: Hostname set to . Jul 12 10:22:50.824460 systemd[1]: Initializing machine ID from VM UUID. Jul 12 10:22:50.824469 systemd[1]: Queued start job for default target initrd.target. 
Jul 12 10:22:50.824480 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 10:22:50.824488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 10:22:50.824498 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 10:22:50.824506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 10:22:50.824515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 10:22:50.824525 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 10:22:50.824535 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 10:22:50.824545 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 10:22:50.824554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 10:22:50.824570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 10:22:50.824579 systemd[1]: Reached target paths.target - Path Units. Jul 12 10:22:50.824588 systemd[1]: Reached target slices.target - Slice Units. Jul 12 10:22:50.824597 systemd[1]: Reached target swap.target - Swaps. Jul 12 10:22:50.824606 systemd[1]: Reached target timers.target - Timer Units. Jul 12 10:22:50.824615 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 10:22:50.824625 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 10:22:50.824634 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 10:22:50.824643 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jul 12 10:22:50.824652 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 10:22:50.824660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 10:22:50.824669 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 10:22:50.824692 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 10:22:50.824701 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 10:22:50.824710 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 10:22:50.824721 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 10:22:50.824730 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 12 10:22:50.824739 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 10:22:50.824748 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 10:22:50.824757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 10:22:50.824765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:22:50.824774 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 10:22:50.824785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 10:22:50.824794 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 10:22:50.824805 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 10:22:50.824839 systemd-journald[220]: Collecting audit messages is disabled. Jul 12 10:22:50.824862 systemd-journald[220]: Journal started Jul 12 10:22:50.824883 systemd-journald[220]: Runtime Journal (/run/log/journal/f483ed346db44dfc976774dbaeb7f454) is 6M, max 48.5M, 42.4M free. 
Jul 12 10:22:50.816424 systemd-modules-load[221]: Inserted module 'overlay' Jul 12 10:22:50.827892 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 10:22:50.826451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:22:50.830231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 10:22:50.832940 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 10:22:50.845709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 10:22:50.846833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 10:22:50.850742 kernel: Bridge firewalling registered Jul 12 10:22:50.850637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 10:22:50.852861 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 12 10:22:50.854339 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 10:22:50.858018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 10:22:50.859756 systemd-tmpfiles[235]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 12 10:22:50.865902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 10:22:50.868758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 10:22:50.870435 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 10:22:50.877474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 10:22:50.878956 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 12 10:22:50.881590 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 10:22:50.897828 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:22:50.949970 systemd-resolved[262]: Positive Trust Anchors: Jul 12 10:22:50.950001 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 10:22:50.950033 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 10:22:50.954428 systemd-resolved[262]: Defaulting to hostname 'linux'. Jul 12 10:22:50.955852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 10:22:50.960744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 10:22:51.028724 kernel: SCSI subsystem initialized Jul 12 10:22:51.037707 kernel: Loading iSCSI transport class v2.0-870. 
Jul 12 10:22:51.047710 kernel: iscsi: registered transport (tcp) Jul 12 10:22:51.068944 kernel: iscsi: registered transport (qla4xxx) Jul 12 10:22:51.068992 kernel: QLogic iSCSI HBA Driver Jul 12 10:22:51.090379 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 10:22:51.108790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 10:22:51.112549 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 10:22:51.164829 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 10:22:51.168164 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 10:22:51.222709 kernel: raid6: avx2x4 gen() 29484 MB/s Jul 12 10:22:51.239702 kernel: raid6: avx2x2 gen() 31187 MB/s Jul 12 10:22:51.256726 kernel: raid6: avx2x1 gen() 25955 MB/s Jul 12 10:22:51.256743 kernel: raid6: using algorithm avx2x2 gen() 31187 MB/s Jul 12 10:22:51.274744 kernel: raid6: .... xor() 19812 MB/s, rmw enabled Jul 12 10:22:51.274770 kernel: raid6: using avx2x2 recovery algorithm Jul 12 10:22:51.294715 kernel: xor: automatically using best checksumming function avx Jul 12 10:22:51.562758 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 10:22:51.573083 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 10:22:51.575484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 10:22:51.609521 systemd-udevd[471]: Using default interface naming scheme 'v255'. Jul 12 10:22:51.617294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 10:22:51.618960 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 10:22:51.645405 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jul 12 10:22:51.676736 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 10:22:51.680263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 10:22:51.869789 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 10:22:51.874971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 10:22:51.911273 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 12 10:22:51.913745 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 10:22:51.917061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 10:22:51.917089 kernel: GPT:9289727 != 19775487 Jul 12 10:22:51.917104 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 10:22:51.917118 kernel: GPT:9289727 != 19775487 Jul 12 10:22:51.918809 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 10:22:51.918843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 10:22:51.922739 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 12 10:22:51.930718 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 10:22:51.942713 kernel: AES CTR mode by8 optimization enabled Jul 12 10:22:51.943701 kernel: libata version 3.00 loaded. Jul 12 10:22:51.944699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 10:22:51.945119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:22:51.948304 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:22:51.952071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:22:51.954916 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 12 10:22:52.018701 kernel: ahci 0000:00:1f.2: version 3.0 Jul 12 10:22:52.021882 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 12 10:22:52.025225 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 12 10:22:52.025421 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 12 10:22:52.025575 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 12 10:22:52.025332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 10:22:52.025441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 10:22:52.030841 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 12 10:22:52.040701 kernel: scsi host0: ahci Jul 12 10:22:52.041714 kernel: scsi host1: ahci Jul 12 10:22:52.043043 kernel: scsi host2: ahci Jul 12 10:22:52.044728 kernel: scsi host3: ahci Jul 12 10:22:52.044914 kernel: scsi host4: ahci Jul 12 10:22:52.047213 kernel: scsi host5: ahci Jul 12 10:22:52.047395 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 12 10:22:52.047413 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 12 10:22:52.048224 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 12 10:22:52.048973 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 12 10:22:52.050719 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 12 10:22:52.050734 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 12 10:22:52.051892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 12 10:22:52.074147 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jul 12 10:22:52.083041 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 10:22:52.085406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 12 10:22:52.096532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 10:22:52.099571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 10:22:52.102409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 10:22:52.125520 disk-uuid[632]: Primary Header is updated. Jul 12 10:22:52.125520 disk-uuid[632]: Secondary Entries is updated. Jul 12 10:22:52.125520 disk-uuid[632]: Secondary Header is updated. Jul 12 10:22:52.129030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 10:22:52.197217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 10:22:52.362718 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 12 10:22:52.362786 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 12 10:22:52.362798 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 12 10:22:52.362808 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 12 10:22:52.363712 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 12 10:22:52.364707 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 12 10:22:52.365718 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 12 10:22:52.365733 kernel: ata3.00: applying bridge limits Jul 12 10:22:52.366705 kernel: ata3.00: configured for UDMA/100 Jul 12 10:22:52.367710 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 12 10:22:52.415728 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 12 10:22:52.416093 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 10:22:52.441921 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 12 10:22:52.880161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 10:22:52.881155 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 10:22:52.882691 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 10:22:52.883292 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 10:22:52.884624 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 10:22:52.914978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 10:22:53.200704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 10:22:53.200901 disk-uuid[636]: The operation has completed successfully. Jul 12 10:22:53.231424 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 10:22:53.231558 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 12 10:22:53.270668 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 10:22:53.287863 sh[667]: Success Jul 12 10:22:53.307552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 10:22:53.307630 kernel: device-mapper: uevent: version 1.0.3 Jul 12 10:22:53.307644 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 10:22:53.317763 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 12 10:22:53.351807 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 10:22:53.356158 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 10:22:53.372939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 10:22:53.379720 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 10:22:53.382604 kernel: BTRFS: device fsid 4d28aa26-35d0-4997-8a2e-14597ed98f41 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (679) Jul 12 10:22:53.382629 kernel: BTRFS info (device dm-0): first mount of filesystem 4d28aa26-35d0-4997-8a2e-14597ed98f41 Jul 12 10:22:53.382641 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:22:53.384037 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 10:22:53.389461 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 10:22:53.391742 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 10:22:53.393914 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 10:22:53.396572 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 10:22:53.399352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 12 10:22:53.424355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (712) Jul 12 10:22:53.424432 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:22:53.424449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:22:53.425792 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 10:22:53.433740 kernel: BTRFS info (device vda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:22:53.435210 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 10:22:53.438573 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 10:22:53.590704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 10:22:53.594014 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 10:22:53.704364 systemd-networkd[849]: lo: Link UP Jul 12 10:22:53.705565 systemd-networkd[849]: lo: Gained carrier Jul 12 10:22:53.709463 systemd-networkd[849]: Enumeration completed Jul 12 10:22:53.709602 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 10:22:53.710813 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 10:22:53.710818 systemd-networkd[849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 10:22:53.711007 systemd[1]: Reached target network.target - Network. Jul 12 10:22:53.712321 systemd-networkd[849]: eth0: Link UP Jul 12 10:22:53.712325 systemd-networkd[849]: eth0: Gained carrier Jul 12 10:22:53.712335 systemd-networkd[849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 10:22:53.736824 systemd-networkd[849]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 10:22:53.750369 ignition[757]: Ignition 2.21.0 Jul 12 10:22:53.750382 ignition[757]: Stage: fetch-offline Jul 12 10:22:53.750436 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:22:53.750446 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 10:22:53.750584 ignition[757]: parsed url from cmdline: "" Jul 12 10:22:53.750589 ignition[757]: no config URL provided Jul 12 10:22:53.750594 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 10:22:53.750603 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jul 12 10:22:53.750633 ignition[757]: op(1): [started] loading QEMU firmware config module Jul 12 10:22:53.750639 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 10:22:53.765001 ignition[757]: op(1): [finished] loading QEMU firmware config module Jul 12 10:22:53.809574 ignition[757]: parsing config with SHA512: 0f45fb647552844574c735a4c5d1131e61f30143361e0a052fdaf632d60a3c84db6649714a85d5c469c6ae70d22b652cffe460f3e3b9e5245fd74b34722aec40 Jul 12 10:22:53.817702 unknown[757]: fetched base config from "system" Jul 12 10:22:53.818710 unknown[757]: fetched user config from "qemu" Jul 12 10:22:53.819312 ignition[757]: fetch-offline: fetch-offline passed Jul 12 10:22:53.819388 ignition[757]: Ignition finished successfully Jul 12 10:22:53.823661 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 10:22:53.826671 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 10:22:53.829836 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 12 10:22:53.917700 ignition[862]: Ignition 2.21.0 Jul 12 10:22:53.917715 ignition[862]: Stage: kargs Jul 12 10:22:53.917880 ignition[862]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:22:53.917893 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 10:22:53.919546 ignition[862]: kargs: kargs passed Jul 12 10:22:53.919606 ignition[862]: Ignition finished successfully Jul 12 10:22:53.927212 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 10:22:53.928839 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 10:22:53.965630 ignition[869]: Ignition 2.21.0 Jul 12 10:22:53.965643 ignition[869]: Stage: disks Jul 12 10:22:53.965813 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jul 12 10:22:53.965824 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 10:22:53.968933 ignition[869]: disks: disks passed Jul 12 10:22:53.969158 ignition[869]: Ignition finished successfully Jul 12 10:22:53.972545 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 10:22:53.975032 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 10:22:53.976156 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 10:22:53.978291 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 10:22:53.980458 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 10:22:53.980867 systemd[1]: Reached target basic.target - Basic System. Jul 12 10:22:53.982245 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 10:22:54.019033 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 12 10:22:54.377426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 10:22:54.381530 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 12 10:22:54.513719 kernel: EXT4-fs (vda9): mounted filesystem e7cb62fe-c14e-444a-ae5a-364f9f21d58c r/w with ordered data mode. Quota mode: none. Jul 12 10:22:54.514740 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 10:22:54.515676 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 10:22:54.518222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 10:22:54.521105 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 10:22:54.521989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 10:22:54.522038 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 10:22:54.522066 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 10:22:54.537442 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 10:22:54.541928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889) Jul 12 10:22:54.541956 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:22:54.541967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 12 10:22:54.541978 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 10:22:54.542782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 10:22:54.546211 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 12 10:22:54.582635 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 10:22:54.587766 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Jul 12 10:22:54.593466 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 10:22:54.600039 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 10:22:54.687821 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 10:22:54.708383 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 10:22:54.710858 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 10:22:54.732305 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 10:22:54.733914 kernel: BTRFS info (device vda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2 Jul 12 10:22:54.746824 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 10:22:54.771492 ignition[1002]: INFO : Ignition 2.21.0 Jul 12 10:22:54.771492 ignition[1002]: INFO : Stage: mount Jul 12 10:22:54.773507 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 10:22:54.773507 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 10:22:54.775711 ignition[1002]: INFO : mount: mount passed Jul 12 10:22:54.775711 ignition[1002]: INFO : Ignition finished successfully Jul 12 10:22:54.778351 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 10:22:54.779722 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 10:22:55.378906 systemd-networkd[849]: eth0: Gained IPv6LL Jul 12 10:22:55.516538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 10:22:55.544706 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015)
Jul 12 10:22:55.544735 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:22:55.545705 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 10:22:55.547122 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 10:22:55.550827 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 10:22:55.591253 ignition[1032]: INFO : Ignition 2.21.0
Jul 12 10:22:55.591253 ignition[1032]: INFO : Stage: files
Jul 12 10:22:55.593531 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 10:22:55.593531 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:22:55.593531 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 10:22:55.597138 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 10:22:55.597138 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 10:22:55.597138 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 10:22:55.597138 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 10:22:55.602657 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 10:22:55.602657 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 12 10:22:55.602657 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 12 10:22:55.597191 unknown[1032]: wrote ssh authorized keys file for user: core
Jul 12 10:22:55.641061 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 10:22:55.870639 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 12 10:22:55.872634 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 10:22:55.874310 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 10:22:55.886348 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 10:22:55.886348 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 10:22:55.886348 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 12 10:22:55.891899 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 12 10:22:55.891899 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 12 10:22:55.891899 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 12 10:22:56.605949 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 10:22:56.928744 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 12 10:22:56.928744 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 10:22:56.932951 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 10:22:56.934909 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 10:22:56.934909 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 10:22:56.934909 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 10:22:56.934909 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 10:22:56.941577 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 10:22:56.941577 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 10:22:56.941577 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 10:22:56.955590 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 10:22:56.959977 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 10:22:56.961793 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 10:22:56.961793 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 10:22:56.966767 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 10:22:56.966767 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 10:22:56.966767 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 10:22:56.966767 ignition[1032]: INFO : files: files passed
Jul 12 10:22:56.966767 ignition[1032]: INFO : Ignition finished successfully
Jul 12 10:22:56.967758 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 10:22:56.970069 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 10:22:56.973829 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 10:22:56.987433 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 10:22:56.987588 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 10:22:56.990408 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 10:22:56.992568 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:22:56.994264 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:22:56.994264 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:22:56.995449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 10:22:56.996473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 10:22:56.997891 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 10:22:57.055508 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 10:22:57.055660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 10:22:57.056400 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 10:22:57.060619 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 10:22:57.061128 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 10:22:57.062716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 10:22:57.100526 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 10:22:57.102697 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 10:22:57.126162 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 10:22:57.126569 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 10:22:57.128762 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 10:22:57.129212 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 10:22:57.129357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 10:22:57.134250 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 10:22:57.134645 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 10:22:57.135125 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 10:22:57.135455 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 10:22:57.135808 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 10:22:57.136270 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 10:22:57.136610 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 10:22:57.137087 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 10:22:57.137422 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 10:22:57.137761 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 10:22:57.138209 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 10:22:57.138507 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 10:22:57.138642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 10:22:57.155664 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 10:22:57.156172 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 10:22:57.156439 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 10:22:57.161525 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 10:22:57.162109 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 10:22:57.162244 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 10:22:57.166675 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 10:22:57.166810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 10:22:57.167284 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 10:22:57.169931 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 10:22:57.174755 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 10:22:57.175146 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 10:22:57.177703 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 10:22:57.178160 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 10:22:57.178271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 10:22:57.181106 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 10:22:57.181218 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 10:22:57.182832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 10:22:57.182968 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 10:22:57.184736 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 10:22:57.184867 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 10:22:57.186021 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 10:22:57.193533 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 10:22:57.194438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 10:22:57.194561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 10:22:57.195266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 10:22:57.195368 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 10:22:57.203072 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 10:22:57.203191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 10:22:57.216119 ignition[1087]: INFO : Ignition 2.21.0
Jul 12 10:22:57.216119 ignition[1087]: INFO : Stage: umount
Jul 12 10:22:57.217824 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 10:22:57.217824 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:22:57.220198 ignition[1087]: INFO : umount: umount passed
Jul 12 10:22:57.220198 ignition[1087]: INFO : Ignition finished successfully
Jul 12 10:22:57.223179 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 10:22:57.223337 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 10:22:57.224996 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 10:22:57.225555 systemd[1]: Stopped target network.target - Network.
Jul 12 10:22:57.226240 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 10:22:57.226302 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 10:22:57.226611 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 10:22:57.226664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 10:22:57.227089 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 10:22:57.227152 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 10:22:57.227412 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 10:22:57.227464 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 10:22:57.227870 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 10:22:57.234631 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 10:22:57.248772 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 10:22:57.248949 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 10:22:57.253277 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 10:22:57.253537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 10:22:57.253698 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 10:22:57.258740 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 10:22:57.259644 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 12 10:22:57.260624 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 10:22:57.260696 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 10:22:57.262116 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 10:22:57.264638 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 10:22:57.264770 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 10:22:57.265244 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 10:22:57.265303 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 10:22:57.269998 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 10:22:57.270058 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 10:22:57.270378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 10:22:57.270447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 10:22:57.275187 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 10:22:57.277818 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 10:22:57.277902 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 10:22:57.301375 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 10:22:57.302877 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 10:22:57.305829 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 10:22:57.305879 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 10:22:57.307973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 10:22:57.308013 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 10:22:57.308361 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 10:22:57.308420 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 10:22:57.312427 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 10:22:57.312479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 10:22:57.315042 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 10:22:57.315096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 10:22:57.318609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 10:22:57.319166 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 10:22:57.319237 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 10:22:57.323095 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 10:22:57.323157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 10:22:57.326376 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 10:22:57.326437 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 10:22:57.329665 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 10:22:57.329735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 10:22:57.330218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 10:22:57.330263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:22:57.336455 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 12 10:22:57.336531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 12 10:22:57.336587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 12 10:22:57.336649 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 10:22:57.337065 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 10:22:57.337205 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 10:22:57.341839 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 10:22:57.341943 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 10:22:57.409423 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 10:22:57.409555 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 10:22:57.410629 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 10:22:57.413917 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 10:22:57.414003 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 10:22:57.417071 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 10:22:57.446069 systemd[1]: Switching root.
Jul 12 10:22:57.479555 systemd-journald[220]: Journal stopped
Jul 12 10:22:58.688876 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 12 10:22:58.688944 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 10:22:58.688963 kernel: SELinux: policy capability open_perms=1
Jul 12 10:22:58.688975 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 10:22:58.688986 kernel: SELinux: policy capability always_check_network=0
Jul 12 10:22:58.689002 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 10:22:58.689014 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 10:22:58.689025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 10:22:58.689041 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 10:22:58.689052 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 10:22:58.689064 kernel: audit: type=1403 audit(1752315777.894:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 10:22:58.689076 systemd[1]: Successfully loaded SELinux policy in 62.940ms.
Jul 12 10:22:58.689095 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.791ms.
Jul 12 10:22:58.689109 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 10:22:58.689123 systemd[1]: Detected virtualization kvm.
Jul 12 10:22:58.689135 systemd[1]: Detected architecture x86-64.
Jul 12 10:22:58.689147 systemd[1]: Detected first boot.
Jul 12 10:22:58.689159 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 10:22:58.689171 zram_generator::config[1132]: No configuration found.
Jul 12 10:22:58.689184 kernel: Guest personality initialized and is inactive
Jul 12 10:22:58.689199 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 12 10:22:58.689215 kernel: Initialized host personality
Jul 12 10:22:58.689229 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 10:22:58.689240 systemd[1]: Populated /etc with preset unit settings.
Jul 12 10:22:58.689253 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 10:22:58.689265 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 10:22:58.689278 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 10:22:58.689294 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 10:22:58.689307 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 10:22:58.689319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 10:22:58.689335 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 10:22:58.689358 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 10:22:58.689371 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 10:22:58.689383 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 10:22:58.689396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 10:22:58.689408 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 10:22:58.689420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 10:22:58.689433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 10:22:58.689445 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 10:22:58.689459 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 10:22:58.689472 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 10:22:58.689485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 10:22:58.689496 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 10:22:58.689508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 10:22:58.689521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 10:22:58.689532 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 10:22:58.689545 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 10:22:58.689560 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 10:22:58.689573 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 10:22:58.689585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 10:22:58.689598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 10:22:58.689611 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 10:22:58.689623 systemd[1]: Reached target swap.target - Swaps.
Jul 12 10:22:58.689636 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 10:22:58.689648 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 10:22:58.689660 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 10:22:58.689674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 10:22:58.689699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 10:22:58.689711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 10:22:58.689723 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 10:22:58.689735 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 10:22:58.689747 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 10:22:58.689760 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 10:22:58.689772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:58.689788 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 10:22:58.689807 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 10:22:58.689819 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 10:22:58.689835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 10:22:58.689848 systemd[1]: Reached target machines.target - Containers.
Jul 12 10:22:58.689864 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 10:22:58.689879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:22:58.689893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 10:22:58.689912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 10:22:58.689925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:22:58.689943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 10:22:58.689956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:22:58.689971 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 10:22:58.689987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:22:58.690001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 10:22:58.690017 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 10:22:58.690032 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 10:22:58.690046 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 10:22:58.690064 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 10:22:58.690079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:22:58.690094 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 10:22:58.690110 kernel: fuse: init (API version 7.41)
Jul 12 10:22:58.690124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 10:22:58.690139 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 10:22:58.690155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 10:22:58.690169 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 10:22:58.690188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 10:22:58.690203 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 10:22:58.690221 systemd[1]: Stopped verity-setup.service.
Jul 12 10:22:58.690241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:58.690255 kernel: ACPI: bus type drm_connector registered
Jul 12 10:22:58.690270 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 10:22:58.690283 kernel: loop: module loaded
Jul 12 10:22:58.690297 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 10:22:58.690312 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 10:22:58.690327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 10:22:58.690342 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 10:22:58.690369 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 10:22:58.690385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 10:22:58.690424 systemd-journald[1203]: Collecting audit messages is disabled.
Jul 12 10:22:58.690452 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 10:22:58.690468 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 10:22:58.690482 systemd-journald[1203]: Journal started
Jul 12 10:22:58.690511 systemd-journald[1203]: Runtime Journal (/run/log/journal/f483ed346db44dfc976774dbaeb7f454) is 6M, max 48.5M, 42.4M free.
Jul 12 10:22:58.413283 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 10:22:58.439827 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 10:22:58.440266 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 10:22:58.692219 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 10:22:58.694715 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 10:22:58.696461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:22:58.696671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:22:58.698306 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 10:22:58.698526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 10:22:58.700025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:22:58.700230 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:22:58.701989 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 10:22:58.702190 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 10:22:58.703802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:22:58.704009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:22:58.705597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 10:22:58.707318 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 10:22:58.709165 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 10:22:58.710947 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 10:22:58.725522 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 10:22:58.728495 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 10:22:58.731131 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 10:22:58.732609 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 10:22:58.732744 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 10:22:58.735078 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 10:22:58.742840 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 10:22:58.744183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:22:58.746837 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 10:22:58.750046 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 10:22:58.751327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 10:22:58.752294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 10:22:58.754424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 10:22:58.755908 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 10:22:58.760338 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 10:22:58.765666 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 10:22:58.771063 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 10:22:58.774802 systemd-journald[1203]: Time spent on flushing to /var/log/journal/f483ed346db44dfc976774dbaeb7f454 is 19.298ms for 1069 entries.
Jul 12 10:22:58.774802 systemd-journald[1203]: System Journal (/var/log/journal/f483ed346db44dfc976774dbaeb7f454) is 8M, max 195.6M, 187.6M free.
Jul 12 10:22:58.806011 systemd-journald[1203]: Received client request to flush runtime journal.
Jul 12 10:22:58.806045 kernel: loop0: detected capacity change from 0 to 224512
Jul 12 10:22:58.773714 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 10:22:58.782873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 10:22:58.785676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 10:22:58.788728 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 10:22:58.792910 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 10:22:58.802664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 10:22:58.808363 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 10:22:58.816713 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 10:22:58.817497 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Jul 12 10:22:58.817517 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Jul 12 10:22:58.822951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 10:22:58.827650 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 10:22:58.832244 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 10:22:58.839709 kernel: loop1: detected capacity change from 0 to 114000
Jul 12 10:22:58.867722 kernel: loop2: detected capacity change from 0 to 146488
Jul 12 10:22:58.868602 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 10:22:58.871304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 10:22:58.895405 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 12 10:22:58.895427 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 12 10:22:58.901832 kernel: loop3: detected capacity change from 0 to 224512
Jul 12 10:22:58.899848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 10:22:58.911723 kernel: loop4: detected capacity change from 0 to 114000
Jul 12 10:22:58.922718 kernel: loop5: detected capacity change from 0 to 146488
Jul 12 10:22:58.934929 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 10:22:58.935637 (sd-merge)[1275]: Merged extensions into '/usr'.
Jul 12 10:22:58.940025 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 10:22:58.940041 systemd[1]: Reloading...
Jul 12 10:22:59.005726 zram_generator::config[1302]: No configuration found.
Jul 12 10:22:59.100212 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 10:22:59.115563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 10:22:59.196474 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 10:22:59.197006 systemd[1]: Reloading finished in 256 ms.
Jul 12 10:22:59.225962 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 10:22:59.227630 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 10:22:59.239949 systemd[1]: Starting ensure-sysext.service...
Jul 12 10:22:59.241737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 10:22:59.251807 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Jul 12 10:22:59.251897 systemd[1]: Reloading...
Jul 12 10:22:59.258146 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 12 10:22:59.258194 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 12 10:22:59.258872 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 10:22:59.259133 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 10:22:59.260038 systemd-tmpfiles[1341]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 10:22:59.260326 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Jul 12 10:22:59.260409 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Jul 12 10:22:59.264599 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 10:22:59.264611 systemd-tmpfiles[1341]: Skipping /boot
Jul 12 10:22:59.274834 systemd-tmpfiles[1341]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 10:22:59.274846 systemd-tmpfiles[1341]: Skipping /boot
Jul 12 10:22:59.303788 zram_generator::config[1368]: No configuration found.
Jul 12 10:22:59.400996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 10:22:59.482197 systemd[1]: Reloading finished in 229 ms.
Jul 12 10:22:59.505422 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 10:22:59.525552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 10:22:59.535232 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 10:22:59.538060 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 10:22:59.551024 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 10:22:59.555005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 10:22:59.558586 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 10:22:59.561514 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 10:22:59.565394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:59.565571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:22:59.570981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:22:59.575081 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:22:59.579559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:22:59.580778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:22:59.581033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:22:59.584006 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 10:22:59.585068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:59.586553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:22:59.586816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:22:59.589169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:22:59.589388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:22:59.591223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:22:59.591629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:22:59.593223 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 10:22:59.604253 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 10:22:59.605749 systemd-udevd[1412]: Using default interface naming scheme 'v255'.
Jul 12 10:22:59.610671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:59.611312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:22:59.612974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:22:59.615479 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 10:22:59.617940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:22:59.621883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:22:59.623052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:22:59.623174 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:22:59.625868 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 10:22:59.626170 augenrules[1443]: No rules
Jul 12 10:22:59.627020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:22:59.629727 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 10:22:59.631246 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 10:22:59.634055 systemd[1]: Finished ensure-sysext.service.
Jul 12 10:22:59.635635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:22:59.637132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:22:59.639272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 10:22:59.639570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 10:22:59.641499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:22:59.641737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:22:59.643385 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:22:59.643603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:22:59.645141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 10:22:59.646798 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 10:22:59.653188 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 10:22:59.665832 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 10:22:59.666926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 10:22:59.666993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 10:22:59.669012 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 10:22:59.670739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 10:22:59.670935 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 10:22:59.728290 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 12 10:22:59.796725 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 10:22:59.806705 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 12 10:22:59.810322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 10:22:59.814881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 10:22:59.823706 kernel: ACPI: button: Power Button [PWRF]
Jul 12 10:22:59.844047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 10:22:59.865515 systemd-networkd[1479]: lo: Link UP
Jul 12 10:22:59.865528 systemd-networkd[1479]: lo: Gained carrier
Jul 12 10:22:59.867236 systemd-networkd[1479]: Enumeration completed
Jul 12 10:22:59.867356 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 10:22:59.870187 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:22:59.870203 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 10:22:59.870766 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 12 10:22:59.871645 systemd-networkd[1479]: eth0: Link UP
Jul 12 10:22:59.872500 systemd-networkd[1479]: eth0: Gained carrier
Jul 12 10:22:59.872528 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:22:59.873920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 10:22:59.886735 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 12 10:22:59.887085 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 12 10:22:59.887260 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 12 10:22:59.885746 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 10:22:59.899691 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 10:22:59.901107 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 10:23:01.451390 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 10:23:01.451441 systemd-timesyncd[1486]: Initial clock synchronization to Sat 2025-07-12 10:23:01.451290 UTC.
Jul 12 10:23:01.456870 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 12 10:23:01.465193 systemd-resolved[1410]: Positive Trust Anchors:
Jul 12 10:23:01.465215 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 10:23:01.465246 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 10:23:01.469027 systemd-resolved[1410]: Defaulting to hostname 'linux'.
Jul 12 10:23:01.472116 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 10:23:01.473384 systemd[1]: Reached target network.target - Network.
Jul 12 10:23:01.474290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 10:23:01.475467 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 10:23:01.476612 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 10:23:01.477851 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 10:23:01.479793 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 12 10:23:01.481113 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 10:23:01.482250 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 10:23:01.483782 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 10:23:01.485004 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 10:23:01.485042 systemd[1]: Reached target paths.target - Path Units.
Jul 12 10:23:01.485946 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 10:23:01.487897 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 10:23:01.490515 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 10:23:01.493938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 12 10:23:01.495461 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 12 10:23:01.496726 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 12 10:23:01.540195 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 10:23:01.542659 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 12 10:23:01.545614 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 10:23:01.561149 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 10:23:01.562863 systemd[1]: Reached target basic.target - Basic System.
Jul 12 10:23:01.564906 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 10:23:01.564998 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 10:23:01.566655 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 10:23:01.573192 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 10:23:01.581560 kernel: kvm_amd: TSC scaling supported
Jul 12 10:23:01.581600 kernel: kvm_amd: Nested Virtualization enabled
Jul 12 10:23:01.581626 kernel: kvm_amd: Nested Paging enabled
Jul 12 10:23:01.581638 kernel: kvm_amd: LBR virtualization supported
Jul 12 10:23:01.581650 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 12 10:23:01.582013 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 10:23:01.583173 kernel: kvm_amd: Virtual GIF supported
Jul 12 10:23:01.586867 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 10:23:01.592135 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 10:23:01.593261 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 10:23:01.595695 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 12 10:23:01.598064 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 10:23:01.600438 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 10:23:01.601808 jq[1533]: false
Jul 12 10:23:01.602713 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 10:23:01.605427 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 10:23:01.612773 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Jul 12 10:23:01.610899 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 10:23:01.612785 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 10:23:01.613279 oslogin_cache_refresh[1535]: Refreshing passwd entry cache
Jul 12 10:23:01.613286 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 10:23:01.614431 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 10:23:01.617966 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 10:23:01.623485 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 10:23:01.623893 oslogin_cache_refresh[1535]: Failure getting users, quitting
Jul 12 10:23:01.626438 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting
Jul 12 10:23:01.626438 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 10:23:01.626438 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache
Jul 12 10:23:01.625498 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 10:23:01.623915 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 10:23:01.625769 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 10:23:01.623976 oslogin_cache_refresh[1535]: Refreshing group entry cache
Jul 12 10:23:01.626732 jq[1545]: true
Jul 12 10:23:01.627692 extend-filesystems[1534]: Found /dev/vda6
Jul 12 10:23:01.627336 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 10:23:01.629488 oslogin_cache_refresh[1535]: Failure getting groups, quitting
Jul 12 10:23:01.632507 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting
Jul 12 10:23:01.632507 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 10:23:01.627955 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 10:23:01.629498 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 10:23:01.637819 extend-filesystems[1534]: Found /dev/vda9
Jul 12 10:23:01.641947 extend-filesystems[1534]: Checking size of /dev/vda9
Jul 12 10:23:01.639182 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 12 10:23:01.639463 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 12 10:23:01.646858 jq[1554]: true
Jul 12 10:23:01.643385 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 10:23:01.643667 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 10:23:01.652798 kernel: EDAC MC: Ver: 3.0.0
Jul 12 10:23:01.663300 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 10:23:01.664793 extend-filesystems[1534]: Resized partition /dev/vda9
Jul 12 10:23:01.672600 tar[1550]: linux-amd64/LICENSE
Jul 12 10:23:01.672600 tar[1550]: linux-amd64/helm
Jul 12 10:23:01.673186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:23:01.678091 extend-filesystems[1575]: resize2fs 1.47.2 (1-Jan-2025)
Jul 12 10:23:01.685100 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 10:23:01.685157 update_engine[1544]: I20250712 10:23:01.682528 1544 main.cc:92] Flatcar Update Engine starting
Jul 12 10:23:01.721747 dbus-daemon[1531]: [system] SELinux support is enabled
Jul 12 10:23:01.722197 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 10:23:01.727495 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 10:23:01.727521 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 10:23:01.728816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 10:23:01.728836 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 10:23:01.831298 update_engine[1544]: I20250712 10:23:01.830945 1544 update_check_scheduler.cc:74] Next update check in 10m31s
Jul 12 10:23:01.831513 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 10:23:01.836514 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 10:23:01.839150 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 10:23:01.851741 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 10:23:01.928902 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 12 10:23:01.928932 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 12 10:23:01.932522 systemd-logind[1540]: New seat seat0.
Jul 12 10:23:01.937890 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 10:23:01.937890 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 10:23:01.937890 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 10:23:01.937811 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 10:23:01.938872 extend-filesystems[1534]: Resized filesystem in /dev/vda9
Jul 12 10:23:01.938441 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 10:23:01.939758 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 10:23:01.941609 bash[1593]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 10:23:01.996931 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 10:23:01.999352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 10:23:02.001240 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 10:23:02.003183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:23:02.010870 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 10:23:02.013603 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 10:23:02.066856 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 10:23:02.067199 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 10:23:02.071127 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 10:23:02.101992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 10:23:02.106127 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 10:23:02.109245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 12 10:23:02.112627 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 10:23:02.263352 containerd[1560]: time="2025-07-12T10:23:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 12 10:23:02.264626 containerd[1560]: time="2025-07-12T10:23:02.264355563Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 12 10:23:02.279673 containerd[1560]: time="2025-07-12T10:23:02.279603910Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.628µs"
Jul 12 10:23:02.279673 containerd[1560]: time="2025-07-12T10:23:02.279655346Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 12 10:23:02.279673 containerd[1560]: time="2025-07-12T10:23:02.279675735Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 12 10:23:02.279981 containerd[1560]: time="2025-07-12T10:23:02.279948045Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 12 10:23:02.279981 containerd[1560]: time="2025-07-12T10:23:02.279969666Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 12 10:23:02.280028 containerd[1560]: time="2025-07-12T10:23:02.279996637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280096 containerd[1560]: time="2025-07-12T10:23:02.280070936Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280096 containerd[1560]: time="2025-07-12T10:23:02.280086144Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280456 containerd[1560]: time="2025-07-12T10:23:02.280415693Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280456 containerd[1560]: time="2025-07-12T10:23:02.280434698Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280456 containerd[1560]: time="2025-07-12T10:23:02.280445318Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280456 containerd[1560]: time="2025-07-12T10:23:02.280453934Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280592 containerd[1560]: time="2025-07-12T10:23:02.280561306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280882 containerd[1560]: time="2025-07-12T10:23:02.280851791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 12 10:23:02.280910 containerd[1560]: time="2025-07-12T10:23:02.280891545Z"
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 10:23:02.280910 containerd[1560]: time="2025-07-12T10:23:02.280902696Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 10:23:02.280969 containerd[1560]: time="2025-07-12T10:23:02.280942060Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 10:23:02.281358 containerd[1560]: time="2025-07-12T10:23:02.281299190Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 10:23:02.281434 containerd[1560]: time="2025-07-12T10:23:02.281411952Z" level=info msg="metadata content store policy set" policy=shared Jul 12 10:23:02.308208 containerd[1560]: time="2025-07-12T10:23:02.308108577Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 10:23:02.308306 containerd[1560]: time="2025-07-12T10:23:02.308272715Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 10:23:02.308359 containerd[1560]: time="2025-07-12T10:23:02.308313170Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 10:23:02.308409 containerd[1560]: time="2025-07-12T10:23:02.308383082Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 10:23:02.308457 containerd[1560]: time="2025-07-12T10:23:02.308435660Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 10:23:02.308570 containerd[1560]: time="2025-07-12T10:23:02.308467821Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 10:23:02.308570 
containerd[1560]: time="2025-07-12T10:23:02.308494961Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 10:23:02.308570 containerd[1560]: time="2025-07-12T10:23:02.308543252Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 10:23:02.308764 containerd[1560]: time="2025-07-12T10:23:02.308584479Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 10:23:02.308764 containerd[1560]: time="2025-07-12T10:23:02.308615157Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 10:23:02.308764 containerd[1560]: time="2025-07-12T10:23:02.308656995Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 10:23:02.308764 containerd[1560]: time="2025-07-12T10:23:02.308702621Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 10:23:02.309433 containerd[1560]: time="2025-07-12T10:23:02.309354494Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 10:23:02.309484 containerd[1560]: time="2025-07-12T10:23:02.309451746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 10:23:02.309506 containerd[1560]: time="2025-07-12T10:23:02.309482314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 10:23:02.309506 containerd[1560]: time="2025-07-12T10:23:02.309498915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 10:23:02.309543 containerd[1560]: time="2025-07-12T10:23:02.309515115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 10:23:02.309564 containerd[1560]: 
time="2025-07-12T10:23:02.309542767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 10:23:02.309609 containerd[1560]: time="2025-07-12T10:23:02.309581269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 10:23:02.309673 containerd[1560]: time="2025-07-12T10:23:02.309605805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 10:23:02.309673 containerd[1560]: time="2025-07-12T10:23:02.309623298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 10:23:02.309769 containerd[1560]: time="2025-07-12T10:23:02.309749365Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 10:23:02.309798 containerd[1560]: time="2025-07-12T10:23:02.309773490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 10:23:02.310022 containerd[1560]: time="2025-07-12T10:23:02.309971672Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 10:23:02.310052 containerd[1560]: time="2025-07-12T10:23:02.310034660Z" level=info msg="Start snapshots syncer" Jul 12 10:23:02.310150 containerd[1560]: time="2025-07-12T10:23:02.310101505Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 10:23:02.342241 containerd[1560]: time="2025-07-12T10:23:02.311353384Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 10:23:02.342241 containerd[1560]: time="2025-07-12T10:23:02.311573216Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 10:23:02.342570 containerd[1560]: time="2025-07-12T10:23:02.342400881Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 10:23:02.342862 containerd[1560]: time="2025-07-12T10:23:02.342796764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 10:23:02.343034 containerd[1560]: time="2025-07-12T10:23:02.343011637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 10:23:02.343069 containerd[1560]: time="2025-07-12T10:23:02.343037486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 10:23:02.343069 containerd[1560]: time="2025-07-12T10:23:02.343056612Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 10:23:02.343107 containerd[1560]: time="2025-07-12T10:23:02.343082781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 10:23:02.343107 containerd[1560]: time="2025-07-12T10:23:02.343098600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 10:23:02.343167 containerd[1560]: time="2025-07-12T10:23:02.343112276Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 10:23:02.343167 containerd[1560]: time="2025-07-12T10:23:02.343154044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 10:23:02.343215 containerd[1560]: time="2025-07-12T10:23:02.343179713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 10:23:02.343215 containerd[1560]: time="2025-07-12T10:23:02.343198728Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 10:23:02.343577 containerd[1560]: time="2025-07-12T10:23:02.343363056Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 10:23:02.343577 containerd[1560]: time="2025-07-12T10:23:02.343388073Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 10:23:02.343681 containerd[1560]: time="2025-07-12T10:23:02.343654343Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 10:23:02.343706 containerd[1560]: time="2025-07-12T10:23:02.343688687Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 10:23:02.343746 containerd[1560]: time="2025-07-12T10:23:02.343702614Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343765111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343787453Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343821156Z" level=info msg="runtime interface created" Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343828620Z" level=info msg="created NRI interface" Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343863085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343886438Z" level=info msg="Connect containerd service" Jul 12 10:23:02.344009 containerd[1560]: time="2025-07-12T10:23:02.343922426Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 10:23:02.345090 
containerd[1560]: time="2025-07-12T10:23:02.345053698Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 10:23:02.499598 tar[1550]: linux-amd64/README.md Jul 12 10:23:02.519765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 10:23:02.737826 containerd[1560]: time="2025-07-12T10:23:02.737754593Z" level=info msg="Start subscribing containerd event" Jul 12 10:23:02.737956 containerd[1560]: time="2025-07-12T10:23:02.737839052Z" level=info msg="Start recovering state" Jul 12 10:23:02.738013 containerd[1560]: time="2025-07-12T10:23:02.737973875Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 10:23:02.738068 containerd[1560]: time="2025-07-12T10:23:02.738011405Z" level=info msg="Start event monitor" Jul 12 10:23:02.738068 containerd[1560]: time="2025-07-12T10:23:02.738033827Z" level=info msg="Start cni network conf syncer for default" Jul 12 10:23:02.738068 containerd[1560]: time="2025-07-12T10:23:02.738049637Z" level=info msg="Start streaming server" Jul 12 10:23:02.738127 containerd[1560]: time="2025-07-12T10:23:02.738052181Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 10:23:02.738127 containerd[1560]: time="2025-07-12T10:23:02.738071377Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 10:23:02.738127 containerd[1560]: time="2025-07-12T10:23:02.738100963Z" level=info msg="runtime interface starting up..." Jul 12 10:23:02.738127 containerd[1560]: time="2025-07-12T10:23:02.738108898Z" level=info msg="starting plugins..." 
Jul 12 10:23:02.738198 containerd[1560]: time="2025-07-12T10:23:02.738160184Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 10:23:02.738368 containerd[1560]: time="2025-07-12T10:23:02.738347385Z" level=info msg="containerd successfully booted in 0.475995s" Jul 12 10:23:02.738666 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 10:23:03.326944 systemd-networkd[1479]: eth0: Gained IPv6LL Jul 12 10:23:03.330241 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 10:23:03.332205 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 10:23:03.335574 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 10:23:03.338516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:03.353069 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 10:23:03.372135 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 10:23:03.372479 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 10:23:03.408000 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 10:23:03.430748 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 10:23:05.046874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:23:05.048858 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 10:23:05.050388 systemd[1]: Startup finished in 4.623s (kernel) + 7.254s (initrd) + 5.668s (userspace) = 17.546s. 
Jul 12 10:23:05.064227 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:23:05.604648 kubelet[1672]: E0712 10:23:05.604556 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:23:05.608370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:23:05.608621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:23:05.609063 systemd[1]: kubelet.service: Consumed 2.067s CPU time, 264.8M memory peak. Jul 12 10:23:05.737344 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 10:23:05.738886 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:55222.service - OpenSSH per-connection server daemon (10.0.0.1:55222). Jul 12 10:23:05.813223 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 55222 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:05.815704 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:05.828808 systemd-logind[1540]: New session 1 of user core. Jul 12 10:23:05.830416 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 10:23:05.831809 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 10:23:05.858215 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 10:23:05.860605 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 12 10:23:05.883772 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 10:23:05.886481 systemd-logind[1540]: New session c1 of user core. Jul 12 10:23:06.040871 systemd[1690]: Queued start job for default target default.target. Jul 12 10:23:06.059056 systemd[1690]: Created slice app.slice - User Application Slice. Jul 12 10:23:06.059082 systemd[1690]: Reached target paths.target - Paths. Jul 12 10:23:06.059123 systemd[1690]: Reached target timers.target - Timers. Jul 12 10:23:06.060746 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 10:23:06.073630 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 10:23:06.073776 systemd[1690]: Reached target sockets.target - Sockets. Jul 12 10:23:06.073817 systemd[1690]: Reached target basic.target - Basic System. Jul 12 10:23:06.073856 systemd[1690]: Reached target default.target - Main User Target. Jul 12 10:23:06.073888 systemd[1690]: Startup finished in 179ms. Jul 12 10:23:06.074147 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 10:23:06.075760 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 10:23:06.141240 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298). Jul 12 10:23:06.194633 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:06.196235 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:06.200914 systemd-logind[1540]: New session 2 of user core. Jul 12 10:23:06.210866 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 12 10:23:06.265975 sshd[1704]: Connection closed by 10.0.0.1 port 48298 Jul 12 10:23:06.266330 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:06.277105 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:48298.service: Deactivated successfully. Jul 12 10:23:06.278792 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 10:23:06.279462 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Jul 12 10:23:06.282135 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:48310.service - OpenSSH per-connection server daemon (10.0.0.1:48310). Jul 12 10:23:06.282638 systemd-logind[1540]: Removed session 2. Jul 12 10:23:06.333162 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:06.334507 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:06.338509 systemd-logind[1540]: New session 3 of user core. Jul 12 10:23:06.348841 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 10:23:06.398155 sshd[1713]: Connection closed by 10.0.0.1 port 48310 Jul 12 10:23:06.398476 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:06.413288 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:48310.service: Deactivated successfully. Jul 12 10:23:06.415320 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 10:23:06.416034 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Jul 12 10:23:06.418974 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:48320.service - OpenSSH per-connection server daemon (10.0.0.1:48320). Jul 12 10:23:06.419534 systemd-logind[1540]: Removed session 3. 
Jul 12 10:23:06.477379 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 48320 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:06.479020 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:06.483313 systemd-logind[1540]: New session 4 of user core. Jul 12 10:23:06.496868 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 10:23:06.549113 sshd[1722]: Connection closed by 10.0.0.1 port 48320 Jul 12 10:23:06.549416 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:06.563399 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:48320.service: Deactivated successfully. Jul 12 10:23:06.565239 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 10:23:06.566054 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Jul 12 10:23:06.569073 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:48322.service - OpenSSH per-connection server daemon (10.0.0.1:48322). Jul 12 10:23:06.569810 systemd-logind[1540]: Removed session 4. Jul 12 10:23:06.621029 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 48322 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:06.622515 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:06.626747 systemd-logind[1540]: New session 5 of user core. Jul 12 10:23:06.642844 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 12 10:23:06.700884 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 10:23:06.701201 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:23:06.718317 sudo[1732]: pam_unix(sudo:session): session closed for user root Jul 12 10:23:06.720088 sshd[1731]: Connection closed by 10.0.0.1 port 48322 Jul 12 10:23:06.720493 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:06.733255 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:48322.service: Deactivated successfully. Jul 12 10:23:06.735003 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 10:23:06.735700 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Jul 12 10:23:06.738458 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:48332.service - OpenSSH per-connection server daemon (10.0.0.1:48332). Jul 12 10:23:06.739193 systemd-logind[1540]: Removed session 5. Jul 12 10:23:06.794120 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 48332 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:06.795510 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:06.799696 systemd-logind[1540]: New session 6 of user core. Jul 12 10:23:06.810831 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 12 10:23:06.863997 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 10:23:06.864326 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:23:06.871667 sudo[1743]: pam_unix(sudo:session): session closed for user root Jul 12 10:23:06.877994 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 10:23:06.878300 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:23:06.888476 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 10:23:06.931661 augenrules[1765]: No rules Jul 12 10:23:06.933424 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 10:23:06.933700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 10:23:06.934937 sudo[1742]: pam_unix(sudo:session): session closed for user root Jul 12 10:23:06.936285 sshd[1741]: Connection closed by 10.0.0.1 port 48332 Jul 12 10:23:06.936619 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:06.950416 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:48332.service: Deactivated successfully. Jul 12 10:23:06.952347 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 10:23:06.953229 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Jul 12 10:23:06.956034 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:48336.service - OpenSSH per-connection server daemon (10.0.0.1:48336). Jul 12 10:23:06.956577 systemd-logind[1540]: Removed session 6. Jul 12 10:23:07.016270 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 48336 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:23:07.018003 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:23:07.022422 systemd-logind[1540]: New session 7 of user core. 
Jul 12 10:23:07.031856 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 10:23:07.084521 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 10:23:07.084840 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 10:23:07.796383 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 10:23:07.821041 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 10:23:08.473370 dockerd[1799]: time="2025-07-12T10:23:08.473266012Z" level=info msg="Starting up" Jul 12 10:23:08.474393 dockerd[1799]: time="2025-07-12T10:23:08.474340097Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 10:23:08.619882 dockerd[1799]: time="2025-07-12T10:23:08.619818996Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 12 10:23:08.915284 dockerd[1799]: time="2025-07-12T10:23:08.915126734Z" level=info msg="Loading containers: start." Jul 12 10:23:08.927743 kernel: Initializing XFRM netlink socket Jul 12 10:23:09.223560 systemd-networkd[1479]: docker0: Link UP Jul 12 10:23:09.229677 dockerd[1799]: time="2025-07-12T10:23:09.229627450Z" level=info msg="Loading containers: done." 
Jul 12 10:23:09.249877 dockerd[1799]: time="2025-07-12T10:23:09.249815284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 10:23:09.250032 dockerd[1799]: time="2025-07-12T10:23:09.249950888Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 12 10:23:09.250100 dockerd[1799]: time="2025-07-12T10:23:09.250083587Z" level=info msg="Initializing buildkit" Jul 12 10:23:09.279811 dockerd[1799]: time="2025-07-12T10:23:09.279711501Z" level=info msg="Completed buildkit initialization" Jul 12 10:23:09.286738 dockerd[1799]: time="2025-07-12T10:23:09.286670537Z" level=info msg="Daemon has completed initialization" Jul 12 10:23:09.286843 dockerd[1799]: time="2025-07-12T10:23:09.286779722Z" level=info msg="API listen on /run/docker.sock" Jul 12 10:23:09.287020 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 10:23:09.962675 containerd[1560]: time="2025-07-12T10:23:09.962624916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 12 10:23:10.569747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108238008.mount: Deactivated successfully. 
Jul 12 10:23:12.003474 containerd[1560]: time="2025-07-12T10:23:12.003373355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:12.004070 containerd[1560]: time="2025-07-12T10:23:12.004025579Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 12 10:23:12.005277 containerd[1560]: time="2025-07-12T10:23:12.005209951Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:12.007857 containerd[1560]: time="2025-07-12T10:23:12.007820328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:12.008895 containerd[1560]: time="2025-07-12T10:23:12.008862213Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.046189437s" Jul 12 10:23:12.008945 containerd[1560]: time="2025-07-12T10:23:12.008903029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 12 10:23:12.009924 containerd[1560]: time="2025-07-12T10:23:12.009890933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 12 10:23:13.478466 containerd[1560]: time="2025-07-12T10:23:13.478397316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:13.479412 containerd[1560]: time="2025-07-12T10:23:13.479350263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 12 10:23:13.480609 containerd[1560]: time="2025-07-12T10:23:13.480550175Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:13.483494 containerd[1560]: time="2025-07-12T10:23:13.483457609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:13.484437 containerd[1560]: time="2025-07-12T10:23:13.484399466Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.474467677s" Jul 12 10:23:13.484437 containerd[1560]: time="2025-07-12T10:23:13.484431536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 12 10:23:13.485108 containerd[1560]: time="2025-07-12T10:23:13.485076045Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 12 10:23:15.119675 containerd[1560]: time="2025-07-12T10:23:15.119592866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:15.120416 containerd[1560]: time="2025-07-12T10:23:15.120344656Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 12 10:23:15.121654 containerd[1560]: time="2025-07-12T10:23:15.121607385Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:15.124579 containerd[1560]: time="2025-07-12T10:23:15.124542221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:15.125689 containerd[1560]: time="2025-07-12T10:23:15.125636203Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.640520213s" Jul 12 10:23:15.125689 containerd[1560]: time="2025-07-12T10:23:15.125676419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 12 10:23:15.126324 containerd[1560]: time="2025-07-12T10:23:15.126284169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 10:23:15.859106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 10:23:15.860987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:16.274294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 10:23:16.285125 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:23:16.399428 kubelet[2093]: E0712 10:23:16.399365 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:23:16.406641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:23:16.406869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:23:16.407267 systemd[1]: kubelet.service: Consumed 421ms CPU time, 111.1M memory peak. Jul 12 10:23:16.554731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293115469.mount: Deactivated successfully. Jul 12 10:23:17.709072 containerd[1560]: time="2025-07-12T10:23:17.708979540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:17.709855 containerd[1560]: time="2025-07-12T10:23:17.709816821Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 12 10:23:17.711162 containerd[1560]: time="2025-07-12T10:23:17.711099176Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:17.713093 containerd[1560]: time="2025-07-12T10:23:17.713041490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:17.713558 containerd[1560]: time="2025-07-12T10:23:17.713508156Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.587183781s" Jul 12 10:23:17.713558 containerd[1560]: time="2025-07-12T10:23:17.713552669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 12 10:23:17.714221 containerd[1560]: time="2025-07-12T10:23:17.714178684Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 10:23:18.306952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267208461.mount: Deactivated successfully. Jul 12 10:23:19.258548 containerd[1560]: time="2025-07-12T10:23:19.258477168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:19.259290 containerd[1560]: time="2025-07-12T10:23:19.259251290Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 12 10:23:19.260321 containerd[1560]: time="2025-07-12T10:23:19.260280892Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:19.262995 containerd[1560]: time="2025-07-12T10:23:19.262960078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:19.263895 containerd[1560]: time="2025-07-12T10:23:19.263810584Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.54958919s" Jul 12 10:23:19.263895 containerd[1560]: time="2025-07-12T10:23:19.263882749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 12 10:23:19.264351 containerd[1560]: time="2025-07-12T10:23:19.264324328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 10:23:19.710852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318124503.mount: Deactivated successfully. Jul 12 10:23:19.717087 containerd[1560]: time="2025-07-12T10:23:19.717033492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:23:19.717840 containerd[1560]: time="2025-07-12T10:23:19.717767580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 10:23:19.719036 containerd[1560]: time="2025-07-12T10:23:19.718986306Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:23:19.720887 containerd[1560]: time="2025-07-12T10:23:19.720828182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:23:19.721468 containerd[1560]: time="2025-07-12T10:23:19.721431884Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 457.081367ms" Jul 12 10:23:19.721468 containerd[1560]: time="2025-07-12T10:23:19.721461660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 10:23:19.722147 containerd[1560]: time="2025-07-12T10:23:19.721944686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 12 10:23:20.280953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777357490.mount: Deactivated successfully. Jul 12 10:23:22.959905 containerd[1560]: time="2025-07-12T10:23:22.959792860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:22.963500 containerd[1560]: time="2025-07-12T10:23:22.963443529Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 12 10:23:22.965013 containerd[1560]: time="2025-07-12T10:23:22.964951749Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:22.998280 containerd[1560]: time="2025-07-12T10:23:22.998207319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:22.999400 containerd[1560]: time="2025-07-12T10:23:22.999351966Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.277375099s" Jul 12 10:23:22.999400 containerd[1560]: time="2025-07-12T10:23:22.999387262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 12 10:23:25.061792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:23:25.062007 systemd[1]: kubelet.service: Consumed 421ms CPU time, 111.1M memory peak. Jul 12 10:23:25.064573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:25.093926 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... Jul 12 10:23:25.093946 systemd[1]: Reloading... Jul 12 10:23:25.193798 zram_generator::config[2290]: No configuration found. Jul 12 10:23:25.457899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:23:25.573880 systemd[1]: Reloading finished in 479 ms. Jul 12 10:23:25.661735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 10:23:25.661851 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 10:23:25.662152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:23:25.662192 systemd[1]: kubelet.service: Consumed 154ms CPU time, 98.2M memory peak. Jul 12 10:23:25.663799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:25.854818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 10:23:25.858707 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 10:23:25.897784 kubelet[2338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:23:25.897784 kubelet[2338]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 10:23:25.897784 kubelet[2338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:23:25.898203 kubelet[2338]: I0712 10:23:25.897839 2338 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 10:23:26.319898 kubelet[2338]: I0712 10:23:26.319630 2338 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 10:23:26.319898 kubelet[2338]: I0712 10:23:26.319681 2338 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 10:23:26.320537 kubelet[2338]: I0712 10:23:26.320484 2338 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 10:23:26.455885 kubelet[2338]: E0712 10:23:26.455780 2338 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:26.459934 kubelet[2338]: I0712 10:23:26.459881 
2338 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:23:26.468348 kubelet[2338]: I0712 10:23:26.468308 2338 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 10:23:26.475013 kubelet[2338]: I0712 10:23:26.474989 2338 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 10:23:26.476232 kubelet[2338]: I0712 10:23:26.476170 2338 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 10:23:26.476450 kubelet[2338]: I0712 10:23:26.476220 2338 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 10:23:26.476746 kubelet[2338]: I0712 10:23:26.476458 2338 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 10:23:26.476746 kubelet[2338]: I0712 10:23:26.476468 2338 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 10:23:26.476746 kubelet[2338]: I0712 10:23:26.476693 2338 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:23:26.481946 kubelet[2338]: I0712 10:23:26.481896 2338 kubelet.go:446] "Attempting to sync node with API server" Jul 12 10:23:26.481983 kubelet[2338]: I0712 10:23:26.481951 2338 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 10:23:26.482013 kubelet[2338]: I0712 10:23:26.481992 2338 kubelet.go:352] "Adding apiserver pod source" Jul 12 10:23:26.482013 kubelet[2338]: I0712 10:23:26.482013 2338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 10:23:26.486866 kubelet[2338]: W0712 10:23:26.485295 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:26.486866 kubelet[2338]: E0712 10:23:26.485392 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:26.486866 kubelet[2338]: W0712 10:23:26.485695 2338 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:26.486866 kubelet[2338]: E0712 10:23:26.485769 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:26.488035 kubelet[2338]: I0712 10:23:26.487096 2338 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 10:23:26.488035 kubelet[2338]: I0712 10:23:26.487666 2338 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 10:23:26.488581 kubelet[2338]: W0712 10:23:26.488538 2338 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 10:23:26.491769 kubelet[2338]: I0712 10:23:26.491709 2338 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 10:23:26.491867 kubelet[2338]: I0712 10:23:26.491804 2338 server.go:1287] "Started kubelet" Jul 12 10:23:26.492066 kubelet[2338]: I0712 10:23:26.492013 2338 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 10:23:26.493583 kubelet[2338]: I0712 10:23:26.493563 2338 server.go:479] "Adding debug handlers to kubelet server" Jul 12 10:23:26.496004 kubelet[2338]: I0712 10:23:26.495703 2338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 10:23:26.496500 kubelet[2338]: I0712 10:23:26.496465 2338 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 10:23:26.496959 kubelet[2338]: E0712 10:23:26.496924 2338 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 10:23:26.498630 kubelet[2338]: I0712 10:23:26.497501 2338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 10:23:26.498630 kubelet[2338]: I0712 10:23:26.497628 2338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 10:23:26.498630 kubelet[2338]: I0712 10:23:26.498067 2338 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 10:23:26.498630 kubelet[2338]: I0712 10:23:26.498180 2338 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 10:23:26.498630 kubelet[2338]: I0712 10:23:26.498243 2338 reconciler.go:26] "Reconciler: start to sync state" Jul 12 10:23:26.498630 kubelet[2338]: E0712 10:23:26.498595 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:23:26.499272 kubelet[2338]: E0712 10:23:26.499229 2338 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Jul 12 10:23:26.499453 kubelet[2338]: W0712 10:23:26.499344 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:26.499453 kubelet[2338]: E0712 10:23:26.499409 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:26.499453 kubelet[2338]: I0712 10:23:26.499419 2338 factory.go:221] Registration of the systemd container factory successfully Jul 12 10:23:26.499527 kubelet[2338]: I0712 10:23:26.499513 2338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 10:23:26.500828 kubelet[2338]: E0712 10:23:26.499506 2338 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185179f11c3e584a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:23:26.491760714 +0000 UTC 
m=+0.628939947,LastTimestamp:2025-07-12 10:23:26.491760714 +0000 UTC m=+0.628939947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 10:23:26.501288 kubelet[2338]: I0712 10:23:26.501250 2338 factory.go:221] Registration of the containerd container factory successfully Jul 12 10:23:26.518898 kubelet[2338]: I0712 10:23:26.518697 2338 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 10:23:26.518898 kubelet[2338]: I0712 10:23:26.518789 2338 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 10:23:26.518898 kubelet[2338]: I0712 10:23:26.518819 2338 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:23:26.522228 kubelet[2338]: I0712 10:23:26.522188 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 10:23:26.524474 kubelet[2338]: I0712 10:23:26.524437 2338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 10:23:26.524523 kubelet[2338]: I0712 10:23:26.524492 2338 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 10:23:26.524560 kubelet[2338]: I0712 10:23:26.524532 2338 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 10:23:26.524560 kubelet[2338]: I0712 10:23:26.524542 2338 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 10:23:26.524635 kubelet[2338]: E0712 10:23:26.524611 2338 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 10:23:26.525793 kubelet[2338]: W0712 10:23:26.525602 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:26.525793 kubelet[2338]: E0712 10:23:26.525670 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:26.599354 kubelet[2338]: E0712 10:23:26.599189 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:23:26.625455 kubelet[2338]: E0712 10:23:26.625412 2338 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 10:23:26.700080 kubelet[2338]: E0712 10:23:26.700003 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:23:26.700534 kubelet[2338]: E0712 10:23:26.700471 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Jul 12 10:23:26.795572 kubelet[2338]: I0712 10:23:26.795503 2338 policy_none.go:49] "None policy: Start" Jul 
12 10:23:26.795572 kubelet[2338]: I0712 10:23:26.795558 2338 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 10:23:26.795779 kubelet[2338]: I0712 10:23:26.795590 2338 state_mem.go:35] "Initializing new in-memory state store" Jul 12 10:23:26.800122 kubelet[2338]: E0712 10:23:26.800087 2338 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:23:26.804566 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 10:23:26.825822 kubelet[2338]: E0712 10:23:26.825742 2338 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 10:23:26.826947 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 10:23:26.830391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 10:23:26.853132 kubelet[2338]: I0712 10:23:26.853000 2338 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 10:23:26.853554 kubelet[2338]: I0712 10:23:26.853249 2338 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 10:23:26.853554 kubelet[2338]: I0712 10:23:26.853264 2338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 10:23:26.853554 kubelet[2338]: I0712 10:23:26.853547 2338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 10:23:26.854853 kubelet[2338]: E0712 10:23:26.854814 2338 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 10:23:26.855157 kubelet[2338]: E0712 10:23:26.854876 2338 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 10:23:26.955782 kubelet[2338]: I0712 10:23:26.955740 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:26.956888 kubelet[2338]: E0712 10:23:26.956831 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 10:23:27.102236 kubelet[2338]: E0712 10:23:27.102167 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Jul 12 10:23:27.158792 kubelet[2338]: I0712 10:23:27.158607 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:27.159114 kubelet[2338]: E0712 10:23:27.159061 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 10:23:27.236737 systemd[1]: Created slice kubepods-burstable-pod831cb1ffb9161033f88a5be31054459e.slice - libcontainer container kubepods-burstable-pod831cb1ffb9161033f88a5be31054459e.slice. Jul 12 10:23:27.253586 kubelet[2338]: E0712 10:23:27.253526 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:27.260461 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 12 10:23:27.269305 kubelet[2338]: E0712 10:23:27.269262 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:27.272228 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 12 10:23:27.274270 kubelet[2338]: E0712 10:23:27.274237 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:27.303781 kubelet[2338]: I0712 10:23:27.303694 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:27.303781 kubelet[2338]: I0712 10:23:27.303766 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:27.303781 kubelet[2338]: I0712 10:23:27.303787 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:27.304025 kubelet[2338]: I0712 10:23:27.303807 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:27.304025 kubelet[2338]: I0712 10:23:27.303829 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:27.304025 kubelet[2338]: I0712 10:23:27.303843 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:27.304025 kubelet[2338]: I0712 10:23:27.303878 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:27.304025 kubelet[2338]: I0712 10:23:27.303936 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:27.304165 kubelet[2338]: I0712 10:23:27.303964 2338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:27.470398 kubelet[2338]: W0712 10:23:27.470210 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:27.470398 kubelet[2338]: E0712 10:23:27.470308 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:27.554170 kubelet[2338]: E0712 10:23:27.554112 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:27.554813 containerd[1560]: time="2025-07-12T10:23:27.554765096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:831cb1ffb9161033f88a5be31054459e,Namespace:kube-system,Attempt:0,}" Jul 12 10:23:27.560662 kubelet[2338]: I0712 10:23:27.560625 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:27.561027 kubelet[2338]: E0712 10:23:27.560993 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 10:23:27.570297 kubelet[2338]: E0712 10:23:27.570251 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:27.570986 containerd[1560]: time="2025-07-12T10:23:27.570938339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 12 10:23:27.575164 kubelet[2338]: E0712 10:23:27.575125 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:27.575470 containerd[1560]: time="2025-07-12T10:23:27.575441407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 12 10:23:27.903392 kubelet[2338]: E0712 10:23:27.903266 2338 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="1.6s" Jul 12 10:23:27.915062 kubelet[2338]: W0712 10:23:27.914974 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:27.915062 kubelet[2338]: E0712 10:23:27.915051 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:27.972945 containerd[1560]: time="2025-07-12T10:23:27.972870211Z" level=info msg="connecting to shim 2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6" 
address="unix:///run/containerd/s/f26dd4618444f3cb253abee99f27a5a58ab4144dbc809863c12836f938bf770f" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:27.974153 kubelet[2338]: W0712 10:23:27.974090 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:27.974547 kubelet[2338]: E0712 10:23:27.974168 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:27.979085 containerd[1560]: time="2025-07-12T10:23:27.979004550Z" level=info msg="connecting to shim 9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593" address="unix:///run/containerd/s/ddbe8c78be46be614515ba47e5d0b4fc0a93373604533d2776cff030400d47c0" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:27.980748 containerd[1560]: time="2025-07-12T10:23:27.980455231Z" level=info msg="connecting to shim 87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca" address="unix:///run/containerd/s/152c092f59e631e55f59a563677d31317962b107132d404d7f24f864ae22767e" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:28.042888 systemd[1]: Started cri-containerd-9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593.scope - libcontainer container 9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593. Jul 12 10:23:28.047353 systemd[1]: Started cri-containerd-87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca.scope - libcontainer container 87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca. 
Jul 12 10:23:28.060068 systemd[1]: Started cri-containerd-2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6.scope - libcontainer container 2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6. Jul 12 10:23:28.063599 kubelet[2338]: W0712 10:23:28.063505 2338 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.137:6443: connect: connection refused Jul 12 10:23:28.063734 kubelet[2338]: E0712 10:23:28.063623 2338 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:23:28.187372 containerd[1560]: time="2025-07-12T10:23:28.186592359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca\"" Jul 12 10:23:28.190242 kubelet[2338]: E0712 10:23:28.190007 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:28.193751 containerd[1560]: time="2025-07-12T10:23:28.193679435Z" level=info msg="CreateContainer within sandbox \"87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 10:23:28.200513 containerd[1560]: time="2025-07-12T10:23:28.200480835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:831cb1ffb9161033f88a5be31054459e,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6\"" Jul 12 10:23:28.201297 kubelet[2338]: E0712 10:23:28.201256 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:28.203386 containerd[1560]: time="2025-07-12T10:23:28.203360147Z" level=info msg="CreateContainer within sandbox \"2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 10:23:28.208223 containerd[1560]: time="2025-07-12T10:23:28.208154752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593\"" Jul 12 10:23:28.208911 kubelet[2338]: E0712 10:23:28.208875 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:28.210629 containerd[1560]: time="2025-07-12T10:23:28.210568210Z" level=info msg="CreateContainer within sandbox \"9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 10:23:28.213079 containerd[1560]: time="2025-07-12T10:23:28.213032433Z" level=info msg="Container 9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:28.218919 containerd[1560]: time="2025-07-12T10:23:28.218882678Z" level=info msg="Container f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:28.227099 containerd[1560]: time="2025-07-12T10:23:28.227052876Z" level=info msg="CreateContainer within sandbox 
\"87cd3cadcc5cac949bb5a7f1500c92383f12b20bb9ef57ab018c25827d61dfca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc\"" Jul 12 10:23:28.227864 containerd[1560]: time="2025-07-12T10:23:28.227825345Z" level=info msg="StartContainer for \"9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc\"" Jul 12 10:23:28.228999 containerd[1560]: time="2025-07-12T10:23:28.228972498Z" level=info msg="connecting to shim 9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc" address="unix:///run/containerd/s/152c092f59e631e55f59a563677d31317962b107132d404d7f24f864ae22767e" protocol=ttrpc version=3 Jul 12 10:23:28.232757 containerd[1560]: time="2025-07-12T10:23:28.230559685Z" level=info msg="Container 7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:28.235737 containerd[1560]: time="2025-07-12T10:23:28.235664432Z" level=info msg="CreateContainer within sandbox \"2f5b4e36b09cb1295a87e09a0da0afed6e319a8f0ebdb2a231b8250de84064e6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9\"" Jul 12 10:23:28.237163 containerd[1560]: time="2025-07-12T10:23:28.237123509Z" level=info msg="StartContainer for \"f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9\"" Jul 12 10:23:28.238272 containerd[1560]: time="2025-07-12T10:23:28.238246687Z" level=info msg="connecting to shim f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9" address="unix:///run/containerd/s/f26dd4618444f3cb253abee99f27a5a58ab4144dbc809863c12836f938bf770f" protocol=ttrpc version=3 Jul 12 10:23:28.240709 containerd[1560]: time="2025-07-12T10:23:28.240646298Z" level=info msg="CreateContainer within sandbox \"9a72217b299f009a766338173e22fd9a8f74b71a69a345fd70d5ef4dfc5c0593\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9\"" Jul 12 10:23:28.241964 containerd[1560]: time="2025-07-12T10:23:28.241935166Z" level=info msg="StartContainer for \"7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9\"" Jul 12 10:23:28.243761 containerd[1560]: time="2025-07-12T10:23:28.243728631Z" level=info msg="connecting to shim 7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9" address="unix:///run/containerd/s/ddbe8c78be46be614515ba47e5d0b4fc0a93373604533d2776cff030400d47c0" protocol=ttrpc version=3 Jul 12 10:23:28.251905 systemd[1]: Started cri-containerd-9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc.scope - libcontainer container 9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc. Jul 12 10:23:28.266841 systemd[1]: Started cri-containerd-f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9.scope - libcontainer container f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9. Jul 12 10:23:28.270916 systemd[1]: Started cri-containerd-7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9.scope - libcontainer container 7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9. 
Jul 12 10:23:28.363555 kubelet[2338]: I0712 10:23:28.363519 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:28.364181 kubelet[2338]: E0712 10:23:28.364155 2338 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jul 12 10:23:28.394974 containerd[1560]: time="2025-07-12T10:23:28.394930947Z" level=info msg="StartContainer for \"9fb34ae1476224c4aec5439107006c03243d39e0affa19ab2380e9228afbacbc\" returns successfully" Jul 12 10:23:28.406021 containerd[1560]: time="2025-07-12T10:23:28.405880419Z" level=info msg="StartContainer for \"f7b587c44e32b39390d7509058af399417c6d740862ce463d948e656e63912c9\" returns successfully" Jul 12 10:23:28.407169 containerd[1560]: time="2025-07-12T10:23:28.407103474Z" level=info msg="StartContainer for \"7df5e4b2f53d9337eaed9567c5f4572596068ae6beb129877ed63849fecf4ba9\" returns successfully" Jul 12 10:23:28.535697 kubelet[2338]: E0712 10:23:28.535544 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:28.536535 kubelet[2338]: E0712 10:23:28.535839 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:28.537861 kubelet[2338]: E0712 10:23:28.537793 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:28.538127 kubelet[2338]: E0712 10:23:28.538102 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:28.540662 kubelet[2338]: E0712 10:23:28.540638 2338 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:28.541793 kubelet[2338]: E0712 10:23:28.540799 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:29.542075 kubelet[2338]: E0712 10:23:29.542040 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:29.542592 kubelet[2338]: E0712 10:23:29.542170 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:29.542592 kubelet[2338]: E0712 10:23:29.542169 2338 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 10:23:29.542592 kubelet[2338]: E0712 10:23:29.542385 2338 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:29.893141 kubelet[2338]: E0712 10:23:29.893011 2338 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 10:23:29.966948 kubelet[2338]: I0712 10:23:29.966897 2338 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:30.036159 kubelet[2338]: E0712 10:23:30.036034 2338 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185179f11c3e584a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:23:26.491760714 +0000 UTC m=+0.628939947,LastTimestamp:2025-07-12 10:23:26.491760714 +0000 UTC m=+0.628939947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 10:23:30.076181 kubelet[2338]: I0712 10:23:30.076125 2338 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 10:23:30.099324 kubelet[2338]: I0712 10:23:30.099275 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:30.105761 kubelet[2338]: E0712 10:23:30.105694 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:30.105761 kubelet[2338]: I0712 10:23:30.105757 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:30.108996 kubelet[2338]: E0712 10:23:30.108954 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:30.108996 kubelet[2338]: I0712 10:23:30.108986 2338 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:30.110831 kubelet[2338]: E0712 10:23:30.110797 2338 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:30.487246 
kubelet[2338]: I0712 10:23:30.487162 2338 apiserver.go:52] "Watching apiserver" Jul 12 10:23:30.499439 kubelet[2338]: I0712 10:23:30.499368 2338 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 10:23:31.822418 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)... Jul 12 10:23:31.822434 systemd[1]: Reloading... Jul 12 10:23:31.908776 zram_generator::config[2665]: No configuration found. Jul 12 10:23:31.997942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:23:32.127226 systemd[1]: Reloading finished in 304 ms. Jul 12 10:23:32.154253 kubelet[2338]: I0712 10:23:32.154194 2338 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:23:32.154313 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:32.180036 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 10:23:32.180361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:23:32.180414 systemd[1]: kubelet.service: Consumed 1.169s CPU time, 132.1M memory peak. Jul 12 10:23:32.182412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:23:32.410579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:23:32.420052 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 10:23:32.459025 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 10:23:32.459025 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 10:23:32.459025 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:23:32.459416 kubelet[2707]: I0712 10:23:32.459072 2707 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 10:23:32.466363 kubelet[2707]: I0712 10:23:32.466324 2707 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 10:23:32.466363 kubelet[2707]: I0712 10:23:32.466352 2707 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 10:23:32.466681 kubelet[2707]: I0712 10:23:32.466655 2707 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 10:23:32.467897 kubelet[2707]: I0712 10:23:32.467872 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 10:23:32.470015 kubelet[2707]: I0712 10:23:32.469994 2707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:23:32.473516 kubelet[2707]: I0712 10:23:32.473479 2707 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 10:23:32.478054 kubelet[2707]: I0712 10:23:32.478025 2707 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 10:23:32.478273 kubelet[2707]: I0712 10:23:32.478233 2707 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 10:23:32.478417 kubelet[2707]: I0712 10:23:32.478260 2707 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 10:23:32.478515 kubelet[2707]: I0712 10:23:32.478420 2707 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 12 10:23:32.478515 kubelet[2707]: I0712 10:23:32.478429 2707 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 10:23:32.478515 kubelet[2707]: I0712 10:23:32.478478 2707 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:23:32.478661 kubelet[2707]: I0712 10:23:32.478633 2707 kubelet.go:446] "Attempting to sync node with API server" Jul 12 10:23:32.478661 kubelet[2707]: I0712 10:23:32.478654 2707 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 10:23:32.478790 kubelet[2707]: I0712 10:23:32.478677 2707 kubelet.go:352] "Adding apiserver pod source" Jul 12 10:23:32.478790 kubelet[2707]: I0712 10:23:32.478687 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 10:23:32.479547 kubelet[2707]: I0712 10:23:32.479504 2707 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.479983 2707 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.480476 2707 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.480515 2707 server.go:1287] "Started kubelet" Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.480644 2707 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.480904 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 10:23:32.481742 kubelet[2707]: I0712 10:23:32.481178 2707 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 10:23:32.482128 kubelet[2707]: I0712 10:23:32.482112 2707 server.go:479] "Adding debug handlers to kubelet server" Jul 12 10:23:32.484329 kubelet[2707]: I0712 10:23:32.484298 2707 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 10:23:32.488483 kubelet[2707]: I0712 10:23:32.487391 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 10:23:32.489330 kubelet[2707]: I0712 10:23:32.489315 2707 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 10:23:32.489525 kubelet[2707]: E0712 10:23:32.489509 2707 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:23:32.489973 kubelet[2707]: I0712 10:23:32.489957 2707 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 10:23:32.490136 kubelet[2707]: I0712 10:23:32.490125 2707 reconciler.go:26] "Reconciler: start to sync state" Jul 12 10:23:32.493226 kubelet[2707]: I0712 10:23:32.493171 2707 factory.go:221] Registration of the systemd container factory successfully Jul 12 10:23:32.493296 kubelet[2707]: I0712 10:23:32.493255 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 10:23:32.495542 kubelet[2707]: I0712 10:23:32.494955 2707 factory.go:221] Registration of the containerd container factory successfully Jul 12 10:23:32.495841 kubelet[2707]: E0712 10:23:32.495629 2707 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 10:23:32.500996 kubelet[2707]: I0712 10:23:32.500960 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 10:23:32.502157 kubelet[2707]: I0712 10:23:32.502131 2707 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 10:23:32.502205 kubelet[2707]: I0712 10:23:32.502167 2707 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 10:23:32.502205 kubelet[2707]: I0712 10:23:32.502189 2707 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 10:23:32.502205 kubelet[2707]: I0712 10:23:32.502196 2707 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 10:23:32.502278 kubelet[2707]: E0712 10:23:32.502243 2707 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 10:23:32.536284 kubelet[2707]: I0712 10:23:32.536249 2707 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 10:23:32.536284 kubelet[2707]: I0712 10:23:32.536275 2707 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 10:23:32.536418 kubelet[2707]: I0712 10:23:32.536304 2707 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:23:32.536536 kubelet[2707]: I0712 10:23:32.536515 2707 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 10:23:32.536565 kubelet[2707]: I0712 10:23:32.536534 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 10:23:32.536565 kubelet[2707]: I0712 10:23:32.536563 2707 policy_none.go:49] "None policy: Start" Jul 12 10:23:32.536608 kubelet[2707]: I0712 10:23:32.536581 2707 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 10:23:32.536608 kubelet[2707]: I0712 10:23:32.536598 2707 state_mem.go:35] "Initializing new in-memory state store" Jul 12 10:23:32.536783 kubelet[2707]: I0712 10:23:32.536764 2707 state_mem.go:75] "Updated machine memory state" Jul 12 10:23:32.540943 kubelet[2707]: I0712 10:23:32.540917 2707 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 10:23:32.541128 kubelet[2707]: I0712 
10:23:32.541099 2707 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 10:23:32.541128 kubelet[2707]: I0712 10:23:32.541117 2707 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 10:23:32.541445 kubelet[2707]: I0712 10:23:32.541289 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 10:23:32.542543 kubelet[2707]: E0712 10:23:32.542522 2707 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 10:23:32.603276 kubelet[2707]: I0712 10:23:32.603170 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.603574 kubelet[2707]: I0712 10:23:32.603170 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:32.603574 kubelet[2707]: I0712 10:23:32.603397 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:32.647084 kubelet[2707]: I0712 10:23:32.647047 2707 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 10:23:32.692155 kubelet[2707]: I0712 10:23:32.691422 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:32.692155 kubelet[2707]: I0712 10:23:32.691468 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " 
pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:32.692155 kubelet[2707]: I0712 10:23:32.691500 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:32.692155 kubelet[2707]: I0712 10:23:32.691524 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.692155 kubelet[2707]: I0712 10:23:32.691547 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.692398 kubelet[2707]: I0712 10:23:32.691567 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/831cb1ffb9161033f88a5be31054459e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"831cb1ffb9161033f88a5be31054459e\") " pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:32.692398 kubelet[2707]: I0712 10:23:32.691589 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.692398 kubelet[2707]: I0712 10:23:32.691657 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.692398 kubelet[2707]: I0712 10:23:32.691758 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:23:32.805273 kubelet[2707]: E0712 10:23:32.805221 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:32.805839 kubelet[2707]: E0712 10:23:32.805435 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:32.805839 kubelet[2707]: E0712 10:23:32.805762 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:32.808103 kubelet[2707]: I0712 10:23:32.808073 2707 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 10:23:32.808163 kubelet[2707]: I0712 10:23:32.808144 2707 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 10:23:33.479745 kubelet[2707]: I0712 10:23:33.479668 2707 apiserver.go:52] "Watching apiserver" Jul 12 
10:23:33.490704 kubelet[2707]: I0712 10:23:33.490651 2707 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 10:23:33.519991 kubelet[2707]: I0712 10:23:33.519948 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:33.520119 kubelet[2707]: E0712 10:23:33.520027 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:33.520446 kubelet[2707]: I0712 10:23:33.520427 2707 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:33.526100 kubelet[2707]: E0712 10:23:33.526006 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 10:23:33.526230 kubelet[2707]: E0712 10:23:33.526202 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:33.528010 kubelet[2707]: E0712 10:23:33.527966 2707 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 10:23:33.528194 kubelet[2707]: E0712 10:23:33.528162 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:33.551198 kubelet[2707]: I0712 10:23:33.551006 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.550984021 podStartE2EDuration="1.550984021s" podCreationTimestamp="2025-07-12 10:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:23:33.550816337 +0000 UTC m=+1.126801712" watchObservedRunningTime="2025-07-12 10:23:33.550984021 +0000 UTC m=+1.126969396" Jul 12 10:23:33.564835 kubelet[2707]: I0712 10:23:33.564779 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.564755467 podStartE2EDuration="1.564755467s" podCreationTimestamp="2025-07-12 10:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:23:33.557861163 +0000 UTC m=+1.133846538" watchObservedRunningTime="2025-07-12 10:23:33.564755467 +0000 UTC m=+1.140740832" Jul 12 10:23:33.565409 kubelet[2707]: I0712 10:23:33.565325 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5653162790000001 podStartE2EDuration="1.565316279s" podCreationTimestamp="2025-07-12 10:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:23:33.565264963 +0000 UTC m=+1.141250338" watchObservedRunningTime="2025-07-12 10:23:33.565316279 +0000 UTC m=+1.141301654" Jul 12 10:23:34.520978 kubelet[2707]: E0712 10:23:34.520928 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:34.521390 kubelet[2707]: E0712 10:23:34.521036 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:35.704556 kubelet[2707]: E0712 10:23:35.704506 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:37.920416 kubelet[2707]: I0712 10:23:37.920373 2707 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 10:23:37.920848 containerd[1560]: time="2025-07-12T10:23:37.920676127Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 10:23:37.921096 kubelet[2707]: I0712 10:23:37.921075 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 10:23:38.582766 systemd[1]: Created slice kubepods-besteffort-pod0b559f12_cf8c_4a73_9cf8_17627fe20696.slice - libcontainer container kubepods-besteffort-pod0b559f12_cf8c_4a73_9cf8_17627fe20696.slice. Jul 12 10:23:38.630024 kubelet[2707]: I0712 10:23:38.629961 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b559f12-cf8c-4a73-9cf8-17627fe20696-xtables-lock\") pod \"kube-proxy-sb86r\" (UID: \"0b559f12-cf8c-4a73-9cf8-17627fe20696\") " pod="kube-system/kube-proxy-sb86r" Jul 12 10:23:38.630024 kubelet[2707]: I0712 10:23:38.630008 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45hc\" (UniqueName: \"kubernetes.io/projected/0b559f12-cf8c-4a73-9cf8-17627fe20696-kube-api-access-c45hc\") pod \"kube-proxy-sb86r\" (UID: \"0b559f12-cf8c-4a73-9cf8-17627fe20696\") " pod="kube-system/kube-proxy-sb86r" Jul 12 10:23:38.630182 kubelet[2707]: I0712 10:23:38.630037 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b559f12-cf8c-4a73-9cf8-17627fe20696-kube-proxy\") pod \"kube-proxy-sb86r\" (UID: \"0b559f12-cf8c-4a73-9cf8-17627fe20696\") " pod="kube-system/kube-proxy-sb86r" Jul 12 10:23:38.630182 kubelet[2707]: I0712 10:23:38.630055 2707 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b559f12-cf8c-4a73-9cf8-17627fe20696-lib-modules\") pod \"kube-proxy-sb86r\" (UID: \"0b559f12-cf8c-4a73-9cf8-17627fe20696\") " pod="kube-system/kube-proxy-sb86r" Jul 12 10:23:38.734457 kubelet[2707]: E0712 10:23:38.734413 2707 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 12 10:23:38.734457 kubelet[2707]: E0712 10:23:38.734446 2707 projected.go:194] Error preparing data for projected volume kube-api-access-c45hc for pod kube-system/kube-proxy-sb86r: configmap "kube-root-ca.crt" not found Jul 12 10:23:38.734617 kubelet[2707]: E0712 10:23:38.734495 2707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b559f12-cf8c-4a73-9cf8-17627fe20696-kube-api-access-c45hc podName:0b559f12-cf8c-4a73-9cf8-17627fe20696 nodeName:}" failed. No retries permitted until 2025-07-12 10:23:39.234478667 +0000 UTC m=+6.810464042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c45hc" (UniqueName: "kubernetes.io/projected/0b559f12-cf8c-4a73-9cf8-17627fe20696-kube-api-access-c45hc") pod "kube-proxy-sb86r" (UID: "0b559f12-cf8c-4a73-9cf8-17627fe20696") : configmap "kube-root-ca.crt" not found Jul 12 10:23:39.098748 systemd[1]: Created slice kubepods-besteffort-podd7df0e33_55dd_46f2_96a8_97aecb20cebd.slice - libcontainer container kubepods-besteffort-podd7df0e33_55dd_46f2_96a8_97aecb20cebd.slice. 
Jul 12 10:23:39.133700 kubelet[2707]: I0712 10:23:39.133673 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62gm\" (UniqueName: \"kubernetes.io/projected/d7df0e33-55dd-46f2-96a8-97aecb20cebd-kube-api-access-s62gm\") pod \"tigera-operator-747864d56d-jnfpc\" (UID: \"d7df0e33-55dd-46f2-96a8-97aecb20cebd\") " pod="tigera-operator/tigera-operator-747864d56d-jnfpc" Jul 12 10:23:39.134019 kubelet[2707]: I0712 10:23:39.133732 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7df0e33-55dd-46f2-96a8-97aecb20cebd-var-lib-calico\") pod \"tigera-operator-747864d56d-jnfpc\" (UID: \"d7df0e33-55dd-46f2-96a8-97aecb20cebd\") " pod="tigera-operator/tigera-operator-747864d56d-jnfpc" Jul 12 10:23:39.402196 containerd[1560]: time="2025-07-12T10:23:39.402069437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jnfpc,Uid:d7df0e33-55dd-46f2-96a8-97aecb20cebd,Namespace:tigera-operator,Attempt:0,}" Jul 12 10:23:39.421189 containerd[1560]: time="2025-07-12T10:23:39.421141091Z" level=info msg="connecting to shim 7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559" address="unix:///run/containerd/s/652ac40f71952f312eb651af28c636ba9b77d994ac8ba19e1d5a3ce5ef2508be" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:39.449924 systemd[1]: Started cri-containerd-7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559.scope - libcontainer container 7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559. 
Jul 12 10:23:39.496689 containerd[1560]: time="2025-07-12T10:23:39.496634265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jnfpc,Uid:d7df0e33-55dd-46f2-96a8-97aecb20cebd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559\"" Jul 12 10:23:39.498028 kubelet[2707]: E0712 10:23:39.498004 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:39.498355 containerd[1560]: time="2025-07-12T10:23:39.498284210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 10:23:39.498446 containerd[1560]: time="2025-07-12T10:23:39.498429889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb86r,Uid:0b559f12-cf8c-4a73-9cf8-17627fe20696,Namespace:kube-system,Attempt:0,}" Jul 12 10:23:39.524111 containerd[1560]: time="2025-07-12T10:23:39.524063321Z" level=info msg="connecting to shim f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc" address="unix:///run/containerd/s/82205dcf199979b49df4739da8d1cff31e3dc441c4c416db318248c7d793b81d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:39.552869 systemd[1]: Started cri-containerd-f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc.scope - libcontainer container f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc. 
Jul 12 10:23:39.581684 containerd[1560]: time="2025-07-12T10:23:39.581630975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sb86r,Uid:0b559f12-cf8c-4a73-9cf8-17627fe20696,Namespace:kube-system,Attempt:0,} returns sandbox id \"f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc\"" Jul 12 10:23:39.582313 kubelet[2707]: E0712 10:23:39.582287 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:39.584185 containerd[1560]: time="2025-07-12T10:23:39.584152908Z" level=info msg="CreateContainer within sandbox \"f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 10:23:39.597179 containerd[1560]: time="2025-07-12T10:23:39.597132791Z" level=info msg="Container f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:39.604977 containerd[1560]: time="2025-07-12T10:23:39.604938899Z" level=info msg="CreateContainer within sandbox \"f08419ca5afe038718811fe4ebf2fb53b923df81536388a04ea963ae3e8353cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1\"" Jul 12 10:23:39.605471 containerd[1560]: time="2025-07-12T10:23:39.605431521Z" level=info msg="StartContainer for \"f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1\"" Jul 12 10:23:39.607110 containerd[1560]: time="2025-07-12T10:23:39.607086126Z" level=info msg="connecting to shim f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1" address="unix:///run/containerd/s/82205dcf199979b49df4739da8d1cff31e3dc441c4c416db318248c7d793b81d" protocol=ttrpc version=3 Jul 12 10:23:39.683879 systemd[1]: Started cri-containerd-f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1.scope - libcontainer 
container f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1. Jul 12 10:23:39.731089 containerd[1560]: time="2025-07-12T10:23:39.731020661Z" level=info msg="StartContainer for \"f0431fc41e69dd81f90e4eebbd85d8465f348749613c0aedb8be27a971f2e2b1\" returns successfully" Jul 12 10:23:40.532852 kubelet[2707]: E0712 10:23:40.532802 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:40.541260 kubelet[2707]: I0712 10:23:40.541191 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sb86r" podStartSLOduration=2.541170222 podStartE2EDuration="2.541170222s" podCreationTimestamp="2025-07-12 10:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:23:40.541000097 +0000 UTC m=+8.116985492" watchObservedRunningTime="2025-07-12 10:23:40.541170222 +0000 UTC m=+8.117155597" Jul 12 10:23:40.831380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368544451.mount: Deactivated successfully. 
Jul 12 10:23:41.163965 containerd[1560]: time="2025-07-12T10:23:41.163837151Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:41.164799 containerd[1560]: time="2025-07-12T10:23:41.164767878Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 12 10:23:41.165942 containerd[1560]: time="2025-07-12T10:23:41.165895689Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:41.167936 containerd[1560]: time="2025-07-12T10:23:41.167910614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:41.168511 containerd[1560]: time="2025-07-12T10:23:41.168479650Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.670159752s" Jul 12 10:23:41.168545 containerd[1560]: time="2025-07-12T10:23:41.168518914Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 12 10:23:41.170479 containerd[1560]: time="2025-07-12T10:23:41.170455389Z" level=info msg="CreateContainer within sandbox \"7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 10:23:41.177058 containerd[1560]: time="2025-07-12T10:23:41.177017560Z" level=info msg="Container 
9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:41.183649 containerd[1560]: time="2025-07-12T10:23:41.183610730Z" level=info msg="CreateContainer within sandbox \"7dafd391ab1918547bf9380b6645894dda57936fe368ffd78dd5226287080559\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355\"" Jul 12 10:23:41.184095 containerd[1560]: time="2025-07-12T10:23:41.184051441Z" level=info msg="StartContainer for \"9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355\"" Jul 12 10:23:41.185637 containerd[1560]: time="2025-07-12T10:23:41.185400395Z" level=info msg="connecting to shim 9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355" address="unix:///run/containerd/s/652ac40f71952f312eb651af28c636ba9b77d994ac8ba19e1d5a3ce5ef2508be" protocol=ttrpc version=3 Jul 12 10:23:41.234843 systemd[1]: Started cri-containerd-9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355.scope - libcontainer container 9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355. 
Jul 12 10:23:41.263769 containerd[1560]: time="2025-07-12T10:23:41.263679823Z" level=info msg="StartContainer for \"9b180ebd159050e624b8e26679ae1c403936d229c0b84b8024abf3307cb17355\" returns successfully" Jul 12 10:23:42.459172 kubelet[2707]: E0712 10:23:42.459116 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:42.489223 kubelet[2707]: I0712 10:23:42.489167 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jnfpc" podStartSLOduration=1.817651417 podStartE2EDuration="3.489144665s" podCreationTimestamp="2025-07-12 10:23:39 +0000 UTC" firstStartedPulling="2025-07-12 10:23:39.497857765 +0000 UTC m=+7.073843130" lastFinishedPulling="2025-07-12 10:23:41.169351002 +0000 UTC m=+8.745336378" observedRunningTime="2025-07-12 10:23:41.54737519 +0000 UTC m=+9.123360565" watchObservedRunningTime="2025-07-12 10:23:42.489144665 +0000 UTC m=+10.065130040" Jul 12 10:23:42.540111 kubelet[2707]: E0712 10:23:42.540063 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:42.598517 kubelet[2707]: E0712 10:23:42.598459 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:43.543265 kubelet[2707]: E0712 10:23:43.543204 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:43.545131 kubelet[2707]: E0712 10:23:43.545063 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 12 10:23:45.709897 kubelet[2707]: E0712 10:23:45.709852 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:46.564866 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 12 10:23:46.567765 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jul 12 10:23:46.568144 sshd[1778]: Connection closed by 10.0.0.1 port 48336 Jul 12 10:23:46.572383 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:48336.service: Deactivated successfully. Jul 12 10:23:46.575431 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 10:23:46.576109 systemd[1]: session-7.scope: Consumed 4.301s CPU time, 227.8M memory peak. Jul 12 10:23:46.577609 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Jul 12 10:23:46.580083 systemd-logind[1540]: Removed session 7. Jul 12 10:23:47.506745 update_engine[1544]: I20250712 10:23:47.504225 1544 update_attempter.cc:509] Updating boot flags... Jul 12 10:23:48.925331 systemd[1]: Created slice kubepods-besteffort-pod03ce7d31_a7fa_42c2_bf81_46075516be24.slice - libcontainer container kubepods-besteffort-pod03ce7d31_a7fa_42c2_bf81_46075516be24.slice. 
Jul 12 10:23:48.996054 kubelet[2707]: I0712 10:23:48.995986 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/03ce7d31-a7fa-42c2-bf81-46075516be24-typha-certs\") pod \"calico-typha-7887c8ff4d-kxsks\" (UID: \"03ce7d31-a7fa-42c2-bf81-46075516be24\") " pod="calico-system/calico-typha-7887c8ff4d-kxsks" Jul 12 10:23:48.996593 kubelet[2707]: I0712 10:23:48.996140 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ce7d31-a7fa-42c2-bf81-46075516be24-tigera-ca-bundle\") pod \"calico-typha-7887c8ff4d-kxsks\" (UID: \"03ce7d31-a7fa-42c2-bf81-46075516be24\") " pod="calico-system/calico-typha-7887c8ff4d-kxsks" Jul 12 10:23:48.996593 kubelet[2707]: I0712 10:23:48.996225 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbczr\" (UniqueName: \"kubernetes.io/projected/03ce7d31-a7fa-42c2-bf81-46075516be24-kube-api-access-tbczr\") pod \"calico-typha-7887c8ff4d-kxsks\" (UID: \"03ce7d31-a7fa-42c2-bf81-46075516be24\") " pod="calico-system/calico-typha-7887c8ff4d-kxsks" Jul 12 10:23:49.230817 kubelet[2707]: E0712 10:23:49.230774 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:49.231291 containerd[1560]: time="2025-07-12T10:23:49.231244945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7887c8ff4d-kxsks,Uid:03ce7d31-a7fa-42c2-bf81-46075516be24,Namespace:calico-system,Attempt:0,}" Jul 12 10:23:49.368388 systemd[1]: Created slice kubepods-besteffort-pode91389d6_f789_4953_b773_4f711d977f78.slice - libcontainer container kubepods-besteffort-pode91389d6_f789_4953_b773_4f711d977f78.slice. 
Jul 12 10:23:49.376546 containerd[1560]: time="2025-07-12T10:23:49.376476830Z" level=info msg="connecting to shim 20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f" address="unix:///run/containerd/s/2480bb3fe297a7b769df615c3c3da100af969740fee975d12a8dc25568128139" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:49.399070 kubelet[2707]: I0712 10:23:49.399013 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-lib-modules\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399070 kubelet[2707]: I0712 10:23:49.399068 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qpc\" (UniqueName: \"kubernetes.io/projected/e91389d6-f789-4953-b773-4f711d977f78-kube-api-access-w5qpc\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399308 kubelet[2707]: I0712 10:23:49.399087 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-var-lib-calico\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399308 kubelet[2707]: I0712 10:23:49.399104 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-flexvol-driver-host\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399308 kubelet[2707]: I0712 10:23:49.399120 2707 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-xtables-lock\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399308 kubelet[2707]: I0712 10:23:49.399140 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-cni-bin-dir\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399308 kubelet[2707]: I0712 10:23:49.399156 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-cni-net-dir\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399445 kubelet[2707]: I0712 10:23:49.399170 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e91389d6-f789-4953-b773-4f711d977f78-tigera-ca-bundle\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399445 kubelet[2707]: I0712 10:23:49.399186 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e91389d6-f789-4953-b773-4f711d977f78-node-certs\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399445 kubelet[2707]: I0712 10:23:49.399199 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-policysync\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399445 kubelet[2707]: I0712 10:23:49.399212 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-var-run-calico\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.399445 kubelet[2707]: I0712 10:23:49.399227 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e91389d6-f789-4953-b773-4f711d977f78-cni-log-dir\") pod \"calico-node-pwlkk\" (UID: \"e91389d6-f789-4953-b773-4f711d977f78\") " pod="calico-system/calico-node-pwlkk" Jul 12 10:23:49.406870 systemd[1]: Started cri-containerd-20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f.scope - libcontainer container 20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f. 
Jul 12 10:23:49.483829 containerd[1560]: time="2025-07-12T10:23:49.483686597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7887c8ff4d-kxsks,Uid:03ce7d31-a7fa-42c2-bf81-46075516be24,Namespace:calico-system,Attempt:0,} returns sandbox id \"20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f\"" Jul 12 10:23:49.484750 kubelet[2707]: E0712 10:23:49.484705 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:49.486208 containerd[1560]: time="2025-07-12T10:23:49.486148563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 10:23:49.501558 kubelet[2707]: E0712 10:23:49.501449 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.501558 kubelet[2707]: W0712 10:23:49.501489 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.501769 kubelet[2707]: E0712 10:23:49.501563 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.504623 kubelet[2707]: E0712 10:23:49.504589 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.504623 kubelet[2707]: W0712 10:23:49.504612 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.504699 kubelet[2707]: E0712 10:23:49.504624 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.508709 kubelet[2707]: E0712 10:23:49.508687 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.508709 kubelet[2707]: W0712 10:23:49.508704 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.508806 kubelet[2707]: E0712 10:23:49.508754 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.594899 kubelet[2707]: E0712 10:23:49.594681 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:23:49.673023 containerd[1560]: time="2025-07-12T10:23:49.672964835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwlkk,Uid:e91389d6-f789-4953-b773-4f711d977f78,Namespace:calico-system,Attempt:0,}" Jul 12 10:23:49.691749 kubelet[2707]: E0712 10:23:49.691693 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.691749 kubelet[2707]: W0712 10:23:49.691735 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.691749 kubelet[2707]: E0712 10:23:49.691760 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.691968 kubelet[2707]: E0712 10:23:49.691941 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.691968 kubelet[2707]: W0712 10:23:49.691953 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.691968 kubelet[2707]: E0712 10:23:49.691960 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.692194 kubelet[2707]: E0712 10:23:49.692168 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.692194 kubelet[2707]: W0712 10:23:49.692185 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.692194 kubelet[2707]: E0712 10:23:49.692194 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.692459 kubelet[2707]: E0712 10:23:49.692443 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.692459 kubelet[2707]: W0712 10:23:49.692454 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.692459 kubelet[2707]: E0712 10:23:49.692462 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.692734 kubelet[2707]: E0712 10:23:49.692692 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.692734 kubelet[2707]: W0712 10:23:49.692704 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.692943 kubelet[2707]: E0712 10:23:49.692712 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.693003 kubelet[2707]: E0712 10:23:49.692971 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.693003 kubelet[2707]: W0712 10:23:49.692986 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.693003 kubelet[2707]: E0712 10:23:49.692995 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.693202 kubelet[2707]: E0712 10:23:49.693186 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.693202 kubelet[2707]: W0712 10:23:49.693196 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.693256 kubelet[2707]: E0712 10:23:49.693204 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.693523 kubelet[2707]: E0712 10:23:49.693390 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.693523 kubelet[2707]: W0712 10:23:49.693415 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.693523 kubelet[2707]: E0712 10:23:49.693423 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.693768 kubelet[2707]: E0712 10:23:49.693681 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.693768 kubelet[2707]: W0712 10:23:49.693709 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.693768 kubelet[2707]: E0712 10:23:49.693761 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.694097 kubelet[2707]: E0712 10:23:49.694079 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.694097 kubelet[2707]: W0712 10:23:49.694092 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.694176 kubelet[2707]: E0712 10:23:49.694102 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.694321 kubelet[2707]: E0712 10:23:49.694298 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.694321 kubelet[2707]: W0712 10:23:49.694313 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.694384 kubelet[2707]: E0712 10:23:49.694322 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.694549 kubelet[2707]: E0712 10:23:49.694522 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.694549 kubelet[2707]: W0712 10:23:49.694541 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.694549 kubelet[2707]: E0712 10:23:49.694551 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.694825 kubelet[2707]: E0712 10:23:49.694794 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.694825 kubelet[2707]: W0712 10:23:49.694808 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.694825 kubelet[2707]: E0712 10:23:49.694819 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.695121 kubelet[2707]: E0712 10:23:49.695074 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.695121 kubelet[2707]: W0712 10:23:49.695102 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.695121 kubelet[2707]: E0712 10:23:49.695131 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.695381 kubelet[2707]: E0712 10:23:49.695336 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.695381 kubelet[2707]: W0712 10:23:49.695353 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.695381 kubelet[2707]: E0712 10:23:49.695363 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.695579 kubelet[2707]: E0712 10:23:49.695526 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.695579 kubelet[2707]: W0712 10:23:49.695549 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.695579 kubelet[2707]: E0712 10:23:49.695559 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.695810 kubelet[2707]: E0712 10:23:49.695789 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.695810 kubelet[2707]: W0712 10:23:49.695801 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.695810 kubelet[2707]: E0712 10:23:49.695810 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.695915 containerd[1560]: time="2025-07-12T10:23:49.695777638Z" level=info msg="connecting to shim 68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6" address="unix:///run/containerd/s/7df1cf4a1776dd37ee41a54db06d3ff729979c9a0c87faf5f4defe1f958757fe" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:23:49.696045 kubelet[2707]: E0712 10:23:49.696024 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.696045 kubelet[2707]: W0712 10:23:49.696034 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.696139 kubelet[2707]: E0712 10:23:49.696060 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.696279 kubelet[2707]: E0712 10:23:49.696256 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.696279 kubelet[2707]: W0712 10:23:49.696271 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.696279 kubelet[2707]: E0712 10:23:49.696283 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.696503 kubelet[2707]: E0712 10:23:49.696487 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.696503 kubelet[2707]: W0712 10:23:49.696501 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.696580 kubelet[2707]: E0712 10:23:49.696510 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.700844 kubelet[2707]: E0712 10:23:49.700814 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.700844 kubelet[2707]: W0712 10:23:49.700827 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.700844 kubelet[2707]: E0712 10:23:49.700838 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.700971 kubelet[2707]: I0712 10:23:49.700866 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27-varrun\") pod \"csi-node-driver-mvff8\" (UID: \"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27\") " pod="calico-system/csi-node-driver-mvff8" Jul 12 10:23:49.701070 kubelet[2707]: E0712 10:23:49.701051 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.701070 kubelet[2707]: W0712 10:23:49.701063 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.701133 kubelet[2707]: E0712 10:23:49.701078 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.701133 kubelet[2707]: I0712 10:23:49.701093 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27-kubelet-dir\") pod \"csi-node-driver-mvff8\" (UID: \"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27\") " pod="calico-system/csi-node-driver-mvff8" Jul 12 10:23:49.701323 kubelet[2707]: E0712 10:23:49.701306 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.701323 kubelet[2707]: W0712 10:23:49.701317 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.701404 kubelet[2707]: E0712 10:23:49.701332 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.701404 kubelet[2707]: I0712 10:23:49.701346 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27-registration-dir\") pod \"csi-node-driver-mvff8\" (UID: \"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27\") " pod="calico-system/csi-node-driver-mvff8" Jul 12 10:23:49.701587 kubelet[2707]: E0712 10:23:49.701559 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.701587 kubelet[2707]: W0712 10:23:49.701572 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.701587 kubelet[2707]: E0712 10:23:49.701586 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.701678 kubelet[2707]: I0712 10:23:49.701599 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27-socket-dir\") pod \"csi-node-driver-mvff8\" (UID: \"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27\") " pod="calico-system/csi-node-driver-mvff8" Jul 12 10:23:49.701834 kubelet[2707]: E0712 10:23:49.701818 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.701834 kubelet[2707]: W0712 10:23:49.701829 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.701899 kubelet[2707]: E0712 10:23:49.701843 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.701899 kubelet[2707]: I0712 10:23:49.701859 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9fwx\" (UniqueName: \"kubernetes.io/projected/4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27-kube-api-access-w9fwx\") pod \"csi-node-driver-mvff8\" (UID: \"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27\") " pod="calico-system/csi-node-driver-mvff8" Jul 12 10:23:49.702169 kubelet[2707]: E0712 10:23:49.702129 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.702169 kubelet[2707]: W0712 10:23:49.702156 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.702228 kubelet[2707]: E0712 10:23:49.702196 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.702412 kubelet[2707]: E0712 10:23:49.702396 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.702412 kubelet[2707]: W0712 10:23:49.702407 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.702555 kubelet[2707]: E0712 10:23:49.702441 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.702623 kubelet[2707]: E0712 10:23:49.702607 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.702623 kubelet[2707]: W0712 10:23:49.702619 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.702676 kubelet[2707]: E0712 10:23:49.702650 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.702862 kubelet[2707]: E0712 10:23:49.702840 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.702862 kubelet[2707]: W0712 10:23:49.702853 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.702974 kubelet[2707]: E0712 10:23:49.702904 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.703100 kubelet[2707]: E0712 10:23:49.703064 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.703100 kubelet[2707]: W0712 10:23:49.703090 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.703254 kubelet[2707]: E0712 10:23:49.703221 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.703299 kubelet[2707]: E0712 10:23:49.703285 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.703299 kubelet[2707]: W0712 10:23:49.703295 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.703414 kubelet[2707]: E0712 10:23:49.703342 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.703519 kubelet[2707]: E0712 10:23:49.703502 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.703519 kubelet[2707]: W0712 10:23:49.703514 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.703596 kubelet[2707]: E0712 10:23:49.703522 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.703791 kubelet[2707]: E0712 10:23:49.703775 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.703791 kubelet[2707]: W0712 10:23:49.703788 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.703854 kubelet[2707]: E0712 10:23:49.703799 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.704048 kubelet[2707]: E0712 10:23:49.704020 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.704048 kubelet[2707]: W0712 10:23:49.704032 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.704048 kubelet[2707]: E0712 10:23:49.704041 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.704260 kubelet[2707]: E0712 10:23:49.704242 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.704260 kubelet[2707]: W0712 10:23:49.704254 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.704329 kubelet[2707]: E0712 10:23:49.704262 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.717868 systemd[1]: Started cri-containerd-68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6.scope - libcontainer container 68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6. 
Jul 12 10:23:49.751697 containerd[1560]: time="2025-07-12T10:23:49.751527161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pwlkk,Uid:e91389d6-f789-4953-b773-4f711d977f78,Namespace:calico-system,Attempt:0,} returns sandbox id \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\"" Jul 12 10:23:49.803400 kubelet[2707]: E0712 10:23:49.803317 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.803400 kubelet[2707]: W0712 10:23:49.803345 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.803400 kubelet[2707]: E0712 10:23:49.803368 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.803904 kubelet[2707]: E0712 10:23:49.803891 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.804060 kubelet[2707]: W0712 10:23:49.803962 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.804060 kubelet[2707]: E0712 10:23:49.803977 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.804270 kubelet[2707]: E0712 10:23:49.804245 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.804363 kubelet[2707]: W0712 10:23:49.804349 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.804482 kubelet[2707]: E0712 10:23:49.804413 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.804740 kubelet[2707]: E0712 10:23:49.804727 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.804876 kubelet[2707]: W0712 10:23:49.804795 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.804876 kubelet[2707]: E0712 10:23:49.804821 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:49.811222 kubelet[2707]: E0712 10:23:49.811205 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.811222 kubelet[2707]: W0712 10:23:49.811217 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.811288 kubelet[2707]: E0712 10:23:49.811241 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:49.818318 kubelet[2707]: E0712 10:23:49.818298 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:49.818318 kubelet[2707]: W0712 10:23:49.818314 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:49.818407 kubelet[2707]: E0712 10:23:49.818329 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:50.806608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount996855073.mount: Deactivated successfully. 
Jul 12 10:23:51.503524 kubelet[2707]: E0712 10:23:51.503446 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:23:52.368332 containerd[1560]: time="2025-07-12T10:23:52.368253354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:52.369407 containerd[1560]: time="2025-07-12T10:23:52.369279835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 12 10:23:52.370899 containerd[1560]: time="2025-07-12T10:23:52.370837032Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:52.373698 containerd[1560]: time="2025-07-12T10:23:52.373632680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:52.374181 containerd[1560]: time="2025-07-12T10:23:52.374149097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.887952885s" Jul 12 10:23:52.374235 containerd[1560]: time="2025-07-12T10:23:52.374185195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 12 10:23:52.375288 containerd[1560]: time="2025-07-12T10:23:52.375244249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 10:23:52.383597 containerd[1560]: time="2025-07-12T10:23:52.383523381Z" level=info msg="CreateContainer within sandbox \"20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 10:23:52.392492 containerd[1560]: time="2025-07-12T10:23:52.392448836Z" level=info msg="Container e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:52.402208 containerd[1560]: time="2025-07-12T10:23:52.402162422Z" level=info msg="CreateContainer within sandbox \"20798c470ac9c50b89704386b99bd555486e08edb13e6c0ef90fe5d552eb043f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4\"" Jul 12 10:23:52.402701 containerd[1560]: time="2025-07-12T10:23:52.402665613Z" level=info msg="StartContainer for \"e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4\"" Jul 12 10:23:52.403969 containerd[1560]: time="2025-07-12T10:23:52.403940486Z" level=info msg="connecting to shim e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4" address="unix:///run/containerd/s/2480bb3fe297a7b769df615c3c3da100af969740fee975d12a8dc25568128139" protocol=ttrpc version=3 Jul 12 10:23:52.427945 systemd[1]: Started cri-containerd-e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4.scope - libcontainer container e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4. 
Jul 12 10:23:52.481322 containerd[1560]: time="2025-07-12T10:23:52.481252863Z" level=info msg="StartContainer for \"e67a5470272e27c464938792d36b57091c89ffd54cc0cd1fc32b6ecffe8ec4d4\" returns successfully" Jul 12 10:23:52.569888 kubelet[2707]: E0712 10:23:52.569454 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:52.614540 kubelet[2707]: E0712 10:23:52.614259 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.614540 kubelet[2707]: W0712 10:23:52.614283 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.614540 kubelet[2707]: E0712 10:23:52.614303 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.617341 kubelet[2707]: I0712 10:23:52.615965 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7887c8ff4d-kxsks" podStartSLOduration=1.726261018 podStartE2EDuration="4.615952806s" podCreationTimestamp="2025-07-12 10:23:48 +0000 UTC" firstStartedPulling="2025-07-12 10:23:49.485339168 +0000 UTC m=+17.061324543" lastFinishedPulling="2025-07-12 10:23:52.375030956 +0000 UTC m=+19.951016331" observedRunningTime="2025-07-12 10:23:52.615284352 +0000 UTC m=+20.191269727" watchObservedRunningTime="2025-07-12 10:23:52.615952806 +0000 UTC m=+20.191938181" Jul 12 10:23:52.619810 kubelet[2707]: E0712 10:23:52.617948 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.619988 kubelet[2707]: W0712 10:23:52.619905 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.619988 kubelet[2707]: E0712 10:23:52.619932 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.623948 kubelet[2707]: E0712 10:23:52.623761 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.623948 kubelet[2707]: W0712 10:23:52.623778 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.623948 kubelet[2707]: E0712 10:23:52.623793 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.729955 kubelet[2707]: E0712 10:23:52.729940 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.729955 kubelet[2707]: W0712 10:23:52.729951 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.730009 kubelet[2707]: E0712 10:23:52.729973 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.730161 kubelet[2707]: E0712 10:23:52.730136 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.730161 kubelet[2707]: W0712 10:23:52.730160 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.730209 kubelet[2707]: E0712 10:23:52.730174 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.730373 kubelet[2707]: E0712 10:23:52.730356 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.730373 kubelet[2707]: W0712 10:23:52.730369 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.730425 kubelet[2707]: E0712 10:23:52.730384 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.730654 kubelet[2707]: E0712 10:23:52.730634 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.730654 kubelet[2707]: W0712 10:23:52.730650 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.730704 kubelet[2707]: E0712 10:23:52.730670 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.730924 kubelet[2707]: E0712 10:23:52.730900 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.730970 kubelet[2707]: W0712 10:23:52.730930 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.730970 kubelet[2707]: E0712 10:23:52.730951 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.731244 kubelet[2707]: E0712 10:23:52.731224 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.731244 kubelet[2707]: W0712 10:23:52.731239 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.731304 kubelet[2707]: E0712 10:23:52.731257 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.731525 kubelet[2707]: E0712 10:23:52.731495 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.731525 kubelet[2707]: W0712 10:23:52.731520 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.731599 kubelet[2707]: E0712 10:23:52.731537 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.731904 kubelet[2707]: E0712 10:23:52.731882 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.731904 kubelet[2707]: W0712 10:23:52.731899 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.731966 kubelet[2707]: E0712 10:23:52.731931 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:52.732274 kubelet[2707]: E0712 10:23:52.732234 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.732274 kubelet[2707]: W0712 10:23:52.732264 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.732462 kubelet[2707]: E0712 10:23:52.732294 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:23:52.732633 kubelet[2707]: E0712 10:23:52.732608 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:52.732633 kubelet[2707]: W0712 10:23:52.732625 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:52.732704 kubelet[2707]: E0712 10:23:52.732637 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:53.503317 kubelet[2707]: E0712 10:23:53.503256 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:23:53.571695 kubelet[2707]: I0712 10:23:53.571636 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:23:53.572351 kubelet[2707]: E0712 10:23:53.572313 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:23:53.642594 kubelet[2707]: E0712 10:23:53.642533 2707 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:23:53.642594 kubelet[2707]: W0712 10:23:53.642563 2707 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:23:53.642594 kubelet[2707]: E0712 10:23:53.642587 2707 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:23:53.840378 containerd[1560]: time="2025-07-12T10:23:53.840181977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:53.841838 containerd[1560]: time="2025-07-12T10:23:53.841471876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 12 10:23:53.843132 containerd[1560]: time="2025-07-12T10:23:53.843085537Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:53.845008 containerd[1560]: time="2025-07-12T10:23:53.844969680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:53.845529 containerd[1560]: time="2025-07-12T10:23:53.845473081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.47010583s" Jul 12 10:23:53.845529 containerd[1560]: time="2025-07-12T10:23:53.845524849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 12 10:23:53.847677 containerd[1560]: time="2025-07-12T10:23:53.847646981Z" level=info msg="CreateContainer within sandbox \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 10:23:53.856687 containerd[1560]: time="2025-07-12T10:23:53.856643442Z" level=info msg="Container 7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:53.865254 containerd[1560]: time="2025-07-12T10:23:53.865215130Z" level=info msg="CreateContainer within sandbox \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\"" Jul 12 10:23:53.865857 containerd[1560]: time="2025-07-12T10:23:53.865810295Z" level=info msg="StartContainer for \"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\"" Jul 12 10:23:53.867375 containerd[1560]: time="2025-07-12T10:23:53.867349876Z" level=info msg="connecting to shim 7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911" address="unix:///run/containerd/s/7df1cf4a1776dd37ee41a54db06d3ff729979c9a0c87faf5f4defe1f958757fe" protocol=ttrpc version=3 Jul 12 10:23:53.888881 systemd[1]: Started cri-containerd-7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911.scope - libcontainer container 7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911. Jul 12 10:23:53.935443 containerd[1560]: time="2025-07-12T10:23:53.935398873Z" level=info msg="StartContainer for \"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\" returns successfully" Jul 12 10:23:53.946188 systemd[1]: cri-containerd-7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911.scope: Deactivated successfully. 
Jul 12 10:23:53.947990 containerd[1560]: time="2025-07-12T10:23:53.947933463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\" id:\"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\" pid:3445 exited_at:{seconds:1752315833 nanos:947310115}" Jul 12 10:23:53.948283 containerd[1560]: time="2025-07-12T10:23:53.948235936Z" level=info msg="received exit event container_id:\"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\" id:\"7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911\" pid:3445 exited_at:{seconds:1752315833 nanos:947310115}" Jul 12 10:23:53.973839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7939d9a3134dff87fc8fb0ae8bdb029f0852b57aec931a883fb43b52c0bde911-rootfs.mount: Deactivated successfully. Jul 12 10:23:54.577409 containerd[1560]: time="2025-07-12T10:23:54.577335508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 10:23:55.503530 kubelet[2707]: E0712 10:23:55.503437 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:23:57.503692 kubelet[2707]: E0712 10:23:57.503542 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:23:57.863004 containerd[1560]: time="2025-07-12T10:23:57.862835084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 
10:23:57.863947 containerd[1560]: time="2025-07-12T10:23:57.863884325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 12 10:23:57.865311 containerd[1560]: time="2025-07-12T10:23:57.865270060Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:57.868221 containerd[1560]: time="2025-07-12T10:23:57.868162029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:23:57.868808 containerd[1560]: time="2025-07-12T10:23:57.868758494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.291338186s" Jul 12 10:23:57.868808 containerd[1560]: time="2025-07-12T10:23:57.868802417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 12 10:23:57.870950 containerd[1560]: time="2025-07-12T10:23:57.870922839Z" level=info msg="CreateContainer within sandbox \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 10:23:57.884014 containerd[1560]: time="2025-07-12T10:23:57.883958313Z" level=info msg="Container 07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:23:57.894941 containerd[1560]: time="2025-07-12T10:23:57.894896328Z" level=info msg="CreateContainer within sandbox 
\"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\"" Jul 12 10:23:57.895469 containerd[1560]: time="2025-07-12T10:23:57.895422631Z" level=info msg="StartContainer for \"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\"" Jul 12 10:23:57.896758 containerd[1560]: time="2025-07-12T10:23:57.896733575Z" level=info msg="connecting to shim 07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645" address="unix:///run/containerd/s/7df1cf4a1776dd37ee41a54db06d3ff729979c9a0c87faf5f4defe1f958757fe" protocol=ttrpc version=3 Jul 12 10:23:57.920840 systemd[1]: Started cri-containerd-07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645.scope - libcontainer container 07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645. Jul 12 10:23:57.967067 containerd[1560]: time="2025-07-12T10:23:57.966956388Z" level=info msg="StartContainer for \"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\" returns successfully" Jul 12 10:23:59.085689 containerd[1560]: time="2025-07-12T10:23:59.085635130Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 10:23:59.088611 systemd[1]: cri-containerd-07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645.scope: Deactivated successfully. Jul 12 10:23:59.089011 systemd[1]: cri-containerd-07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645.scope: Consumed 613ms CPU time, 176.8M memory peak, 3.2M read from disk, 171.2M written to disk. 
Jul 12 10:23:59.089584 containerd[1560]: time="2025-07-12T10:23:59.089546066Z" level=info msg="received exit event container_id:\"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\" id:\"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\" pid:3505 exited_at:{seconds:1752315839 nanos:89325660}" Jul 12 10:23:59.089691 containerd[1560]: time="2025-07-12T10:23:59.089662295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\" id:\"07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645\" pid:3505 exited_at:{seconds:1752315839 nanos:89325660}" Jul 12 10:23:59.112001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07614feb053b5d0af1b7bfaa0925ef5e9c14e6941d89a23fb6ec0073b7cd2645-rootfs.mount: Deactivated successfully. Jul 12 10:23:59.183855 kubelet[2707]: I0712 10:23:59.183801 2707 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 10:23:59.509208 systemd[1]: Created slice kubepods-besteffort-pod4e0c1f65_4f12_4625_8a4a_ef0ef07f6a27.slice - libcontainer container kubepods-besteffort-pod4e0c1f65_4f12_4625_8a4a_ef0ef07f6a27.slice. Jul 12 10:23:59.511869 containerd[1560]: time="2025-07-12T10:23:59.511818181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvff8,Uid:4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:00.011297 systemd[1]: Created slice kubepods-burstable-pod9c226913_f0b1_4c3a_8e9d_41c8ddc6d70e.slice - libcontainer container kubepods-burstable-pod9c226913_f0b1_4c3a_8e9d_41c8ddc6d70e.slice. Jul 12 10:24:00.019423 systemd[1]: Created slice kubepods-burstable-poda2850aa5_dd6d_4817_b6b8_f8a76320c95c.slice - libcontainer container kubepods-burstable-poda2850aa5_dd6d_4817_b6b8_f8a76320c95c.slice. 
Jul 12 10:24:00.025152 systemd[1]: Created slice kubepods-besteffort-podd7530ec4_ef51_4337_ab9f_6e8f00c29a8e.slice - libcontainer container kubepods-besteffort-podd7530ec4_ef51_4337_ab9f_6e8f00c29a8e.slice. Jul 12 10:24:00.028961 systemd[1]: Created slice kubepods-besteffort-pode48e02da_19e4_41ee_ac20_7f6f2fb189de.slice - libcontainer container kubepods-besteffort-pode48e02da_19e4_41ee_ac20_7f6f2fb189de.slice. Jul 12 10:24:00.035001 systemd[1]: Created slice kubepods-besteffort-pod153be907_6581_4138_b29b_e67e9e609b4f.slice - libcontainer container kubepods-besteffort-pod153be907_6581_4138_b29b_e67e9e609b4f.slice. Jul 12 10:24:00.039927 systemd[1]: Created slice kubepods-besteffort-pod0a81c59f_5f69_4f57_a191_e15066abbd4b.slice - libcontainer container kubepods-besteffort-pod0a81c59f_5f69_4f57_a191_e15066abbd4b.slice. Jul 12 10:24:00.044216 systemd[1]: Created slice kubepods-besteffort-podcb03ae40_baa6_4128_a2f4_201bd683dff9.slice - libcontainer container kubepods-besteffort-podcb03ae40_baa6_4128_a2f4_201bd683dff9.slice. 
Jul 12 10:24:00.182465 kubelet[2707]: I0712 10:24:00.182409 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d7530ec4-ef51-4337-ab9f-6e8f00c29a8e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-cdh9p\" (UID: \"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e\") " pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.182465 kubelet[2707]: I0712 10:24:00.182464 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-backend-key-pair\") pod \"whisker-79848c4678-ntvwk\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " pod="calico-system/whisker-79848c4678-ntvwk" Jul 12 10:24:00.182659 kubelet[2707]: I0712 10:24:00.182481 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktmh\" (UniqueName: \"kubernetes.io/projected/e48e02da-19e4-41ee-ac20-7f6f2fb189de-kube-api-access-nktmh\") pod \"whisker-79848c4678-ntvwk\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " pod="calico-system/whisker-79848c4678-ntvwk" Jul 12 10:24:00.182659 kubelet[2707]: I0712 10:24:00.182500 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a81c59f-5f69-4f57-a191-e15066abbd4b-tigera-ca-bundle\") pod \"calico-kube-controllers-778bc96f59-rcg6j\" (UID: \"0a81c59f-5f69-4f57-a191-e15066abbd4b\") " pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" Jul 12 10:24:00.182659 kubelet[2707]: I0712 10:24:00.182518 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wscr\" (UniqueName: \"kubernetes.io/projected/a2850aa5-dd6d-4817-b6b8-f8a76320c95c-kube-api-access-6wscr\") pod \"coredns-668d6bf9bc-s729n\" 
(UID: \"a2850aa5-dd6d-4817-b6b8-f8a76320c95c\") " pod="kube-system/coredns-668d6bf9bc-s729n" Jul 12 10:24:00.182659 kubelet[2707]: I0712 10:24:00.182575 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/153be907-6581-4138-b29b-e67e9e609b4f-calico-apiserver-certs\") pod \"calico-apiserver-6d68fd648b-nbrdq\" (UID: \"153be907-6581-4138-b29b-e67e9e609b4f\") " pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:00.182777 kubelet[2707]: I0712 10:24:00.182654 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g52f\" (UniqueName: \"kubernetes.io/projected/0a81c59f-5f69-4f57-a191-e15066abbd4b-kube-api-access-8g52f\") pod \"calico-kube-controllers-778bc96f59-rcg6j\" (UID: \"0a81c59f-5f69-4f57-a191-e15066abbd4b\") " pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" Jul 12 10:24:00.182777 kubelet[2707]: I0712 10:24:00.182683 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7530ec4-ef51-4337-ab9f-6e8f00c29a8e-config\") pod \"goldmane-768f4c5c69-cdh9p\" (UID: \"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e\") " pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.182777 kubelet[2707]: I0712 10:24:00.182769 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e-config-volume\") pod \"coredns-668d6bf9bc-5hcl7\" (UID: \"9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e\") " pod="kube-system/coredns-668d6bf9bc-5hcl7" Jul 12 10:24:00.182848 kubelet[2707]: I0712 10:24:00.182790 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d7530ec4-ef51-4337-ab9f-6e8f00c29a8e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-cdh9p\" (UID: \"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e\") " pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.182848 kubelet[2707]: I0712 10:24:00.182806 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2850aa5-dd6d-4817-b6b8-f8a76320c95c-config-volume\") pod \"coredns-668d6bf9bc-s729n\" (UID: \"a2850aa5-dd6d-4817-b6b8-f8a76320c95c\") " pod="kube-system/coredns-668d6bf9bc-s729n" Jul 12 10:24:00.182899 kubelet[2707]: I0712 10:24:00.182844 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-ca-bundle\") pod \"whisker-79848c4678-ntvwk\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " pod="calico-system/whisker-79848c4678-ntvwk" Jul 12 10:24:00.182899 kubelet[2707]: I0712 10:24:00.182875 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6dt\" (UniqueName: \"kubernetes.io/projected/153be907-6581-4138-b29b-e67e9e609b4f-kube-api-access-hm6dt\") pod \"calico-apiserver-6d68fd648b-nbrdq\" (UID: \"153be907-6581-4138-b29b-e67e9e609b4f\") " pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:00.182991 kubelet[2707]: I0712 10:24:00.182952 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cb03ae40-baa6-4128-a2f4-201bd683dff9-calico-apiserver-certs\") pod \"calico-apiserver-6d68fd648b-shqdk\" (UID: \"cb03ae40-baa6-4128-a2f4-201bd683dff9\") " pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" Jul 12 10:24:00.183133 kubelet[2707]: I0712 10:24:00.183056 2707 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdj7\" (UniqueName: \"kubernetes.io/projected/9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e-kube-api-access-4tdj7\") pod \"coredns-668d6bf9bc-5hcl7\" (UID: \"9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e\") " pod="kube-system/coredns-668d6bf9bc-5hcl7" Jul 12 10:24:00.183133 kubelet[2707]: I0712 10:24:00.183102 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ww7d\" (UniqueName: \"kubernetes.io/projected/d7530ec4-ef51-4337-ab9f-6e8f00c29a8e-kube-api-access-9ww7d\") pod \"goldmane-768f4c5c69-cdh9p\" (UID: \"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e\") " pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.183133 kubelet[2707]: I0712 10:24:00.183121 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djww\" (UniqueName: \"kubernetes.io/projected/cb03ae40-baa6-4128-a2f4-201bd683dff9-kube-api-access-2djww\") pod \"calico-apiserver-6d68fd648b-shqdk\" (UID: \"cb03ae40-baa6-4128-a2f4-201bd683dff9\") " pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" Jul 12 10:24:00.244406 containerd[1560]: time="2025-07-12T10:24:00.244322223Z" level=error msg="Failed to destroy network for sandbox \"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.246748 systemd[1]: run-netns-cni\x2d6081155a\x2d856d\x2dc9e8\x2d7067\x2d3aea13aee9be.mount: Deactivated successfully. 
Jul 12 10:24:00.340274 containerd[1560]: time="2025-07-12T10:24:00.340071633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvff8,Uid:4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.340460 kubelet[2707]: E0712 10:24:00.340378 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.340836 kubelet[2707]: E0712 10:24:00.340486 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvff8" Jul 12 10:24:00.340836 kubelet[2707]: E0712 10:24:00.340505 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mvff8" 
Jul 12 10:24:00.340836 kubelet[2707]: E0712 10:24:00.340555 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mvff8_calico-system(4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mvff8_calico-system(4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcf68802cbea6fbdbad62581352ce873f997aee8ec2345c305067b81b3b83279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mvff8" podUID="4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27" Jul 12 10:24:00.596301 containerd[1560]: time="2025-07-12T10:24:00.596151017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 10:24:00.616969 kubelet[2707]: E0712 10:24:00.616922 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:00.617549 containerd[1560]: time="2025-07-12T10:24:00.617502184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5hcl7,Uid:9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e,Namespace:kube-system,Attempt:0,}" Jul 12 10:24:00.623044 kubelet[2707]: E0712 10:24:00.623002 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:00.623507 containerd[1560]: time="2025-07-12T10:24:00.623473723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s729n,Uid:a2850aa5-dd6d-4817-b6b8-f8a76320c95c,Namespace:kube-system,Attempt:0,}" Jul 12 10:24:00.628301 containerd[1560]: time="2025-07-12T10:24:00.628253936Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdh9p,Uid:d7530ec4-ef51-4337-ab9f-6e8f00c29a8e,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:00.632085 containerd[1560]: time="2025-07-12T10:24:00.632059730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79848c4678-ntvwk,Uid:e48e02da-19e4-41ee-ac20-7f6f2fb189de,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:00.638776 containerd[1560]: time="2025-07-12T10:24:00.638750436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:24:00.642533 containerd[1560]: time="2025-07-12T10:24:00.642473295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778bc96f59-rcg6j,Uid:0a81c59f-5f69-4f57-a191-e15066abbd4b,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:00.646647 containerd[1560]: time="2025-07-12T10:24:00.646620063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-shqdk,Uid:cb03ae40-baa6-4128-a2f4-201bd683dff9,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:24:00.928896 containerd[1560]: time="2025-07-12T10:24:00.928447864Z" level=error msg="Failed to destroy network for sandbox \"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.930755 containerd[1560]: time="2025-07-12T10:24:00.930554827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5hcl7,Uid:9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.933209 kubelet[2707]: E0712 10:24:00.933148 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.933305 kubelet[2707]: E0712 10:24:00.933237 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5hcl7" Jul 12 10:24:00.933305 kubelet[2707]: E0712 10:24:00.933264 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5hcl7" Jul 12 10:24:00.934804 kubelet[2707]: E0712 10:24:00.933779 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5hcl7_kube-system(9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5hcl7_kube-system(9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"6eac075d7a61791fe75a72822ed787ddc7dafdefc70112f8ce1b3423f03f6ae5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5hcl7" podUID="9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e" Jul 12 10:24:00.950488 containerd[1560]: time="2025-07-12T10:24:00.950328970Z" level=error msg="Failed to destroy network for sandbox \"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.954227 containerd[1560]: time="2025-07-12T10:24:00.954178428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s729n,Uid:a2850aa5-dd6d-4817-b6b8-f8a76320c95c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.954868 kubelet[2707]: E0712 10:24:00.954794 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.954969 kubelet[2707]: E0712 10:24:00.954897 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s729n" Jul 12 10:24:00.954969 kubelet[2707]: E0712 10:24:00.954925 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s729n" Jul 12 10:24:00.955048 kubelet[2707]: E0712 10:24:00.954979 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s729n_kube-system(a2850aa5-dd6d-4817-b6b8-f8a76320c95c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s729n_kube-system(a2850aa5-dd6d-4817-b6b8-f8a76320c95c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c51be50b20cbc7a6754753c734ae967ecf97d1e76130283ea093ad31770d5c27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s729n" podUID="a2850aa5-dd6d-4817-b6b8-f8a76320c95c" Jul 12 10:24:00.967257 containerd[1560]: time="2025-07-12T10:24:00.967052962Z" level=error msg="Failed to destroy network for sandbox \"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.968000 
containerd[1560]: time="2025-07-12T10:24:00.967940335Z" level=error msg="Failed to destroy network for sandbox \"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.969711 containerd[1560]: time="2025-07-12T10:24:00.969671048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.970293 kubelet[2707]: E0712 10:24:00.970230 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.970353 kubelet[2707]: E0712 10:24:00.970307 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:00.970353 kubelet[2707]: E0712 10:24:00.970327 2707 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:00.970424 kubelet[2707]: E0712 10:24:00.970366 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d68fd648b-nbrdq_calico-apiserver(153be907-6581-4138-b29b-e67e9e609b4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d68fd648b-nbrdq_calico-apiserver(153be907-6581-4138-b29b-e67e9e609b4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7924b94fc192620e1a1a600076c60bdf3172a4efad061f51cb9fca5535b45836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" podUID="153be907-6581-4138-b29b-e67e9e609b4f" Jul 12 10:24:00.970943 containerd[1560]: time="2025-07-12T10:24:00.970914563Z" level=error msg="Failed to destroy network for sandbox \"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.972448 containerd[1560]: time="2025-07-12T10:24:00.972413239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-shqdk,Uid:cb03ae40-baa6-4128-a2f4-201bd683dff9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.972951 kubelet[2707]: E0712 10:24:00.972860 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.973033 kubelet[2707]: E0712 10:24:00.972986 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" Jul 12 10:24:00.973033 kubelet[2707]: E0712 10:24:00.973012 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" Jul 12 10:24:00.973202 kubelet[2707]: E0712 10:24:00.973088 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d68fd648b-shqdk_calico-apiserver(cb03ae40-baa6-4128-a2f4-201bd683dff9)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6d68fd648b-shqdk_calico-apiserver(cb03ae40-baa6-4128-a2f4-201bd683dff9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0085158d3879a75ffa6b33fd2773c3d345fb19f3f61bf048e723df4988f00f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" podUID="cb03ae40-baa6-4128-a2f4-201bd683dff9" Jul 12 10:24:00.973827 containerd[1560]: time="2025-07-12T10:24:00.973776279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdh9p,Uid:d7530ec4-ef51-4337-ab9f-6e8f00c29a8e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.974260 kubelet[2707]: E0712 10:24:00.974060 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.974260 kubelet[2707]: E0712 10:24:00.974124 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.974260 kubelet[2707]: E0712 10:24:00.974150 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cdh9p" Jul 12 10:24:00.974473 kubelet[2707]: E0712 10:24:00.974197 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-cdh9p_calico-system(d7530ec4-ef51-4337-ab9f-6e8f00c29a8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-cdh9p_calico-system(d7530ec4-ef51-4337-ab9f-6e8f00c29a8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bb64ed3d0834f3afd9b4e4830b75fe076ad45a1c4b40ca31815bc9b5b9ab40e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cdh9p" podUID="d7530ec4-ef51-4337-ab9f-6e8f00c29a8e" Jul 12 10:24:00.982394 containerd[1560]: time="2025-07-12T10:24:00.982299900Z" level=error msg="Failed to destroy network for sandbox \"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.984147 containerd[1560]: time="2025-07-12T10:24:00.984080687Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-778bc96f59-rcg6j,Uid:0a81c59f-5f69-4f57-a191-e15066abbd4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.984436 kubelet[2707]: E0712 10:24:00.984379 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.984436 kubelet[2707]: E0712 10:24:00.984448 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" Jul 12 10:24:00.984611 kubelet[2707]: E0712 10:24:00.984472 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" Jul 12 10:24:00.984611 kubelet[2707]: E0712 
10:24:00.984524 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-778bc96f59-rcg6j_calico-system(0a81c59f-5f69-4f57-a191-e15066abbd4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-778bc96f59-rcg6j_calico-system(0a81c59f-5f69-4f57-a191-e15066abbd4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d998c5c23716bf171dcc11e3c6d4d920375d7c0fbad44a8eecc2b2692b08ddb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" podUID="0a81c59f-5f69-4f57-a191-e15066abbd4b" Jul 12 10:24:00.985268 containerd[1560]: time="2025-07-12T10:24:00.985231717Z" level=error msg="Failed to destroy network for sandbox \"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.986686 containerd[1560]: time="2025-07-12T10:24:00.986637898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79848c4678-ntvwk,Uid:e48e02da-19e4-41ee-ac20-7f6f2fb189de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.986904 kubelet[2707]: E0712 10:24:00.986866 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:00.986904 kubelet[2707]: E0712 10:24:00.986908 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79848c4678-ntvwk" Jul 12 10:24:00.987014 kubelet[2707]: E0712 10:24:00.986922 2707 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79848c4678-ntvwk" Jul 12 10:24:00.987014 kubelet[2707]: E0712 10:24:00.986952 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79848c4678-ntvwk_calico-system(e48e02da-19e4-41ee-ac20-7f6f2fb189de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79848c4678-ntvwk_calico-system(e48e02da-19e4-41ee-ac20-7f6f2fb189de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8db77cbd5eaa2a7316f99ae7e0ca6b8fd2a746a26d8f190137ad3644688401e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79848c4678-ntvwk" 
podUID="e48e02da-19e4-41ee-ac20-7f6f2fb189de" Jul 12 10:24:01.564771 kubelet[2707]: I0712 10:24:01.564706 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:24:01.565208 kubelet[2707]: E0712 10:24:01.565087 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:01.596663 kubelet[2707]: E0712 10:24:01.596622 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:09.645700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674713825.mount: Deactivated successfully. Jul 12 10:24:11.434790 containerd[1560]: time="2025-07-12T10:24:11.434665054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:11.503768 containerd[1560]: time="2025-07-12T10:24:11.503688916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:24:11.521544 containerd[1560]: time="2025-07-12T10:24:11.521464356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 12 10:24:11.541705 containerd[1560]: time="2025-07-12T10:24:11.541632165Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:11.548387 containerd[1560]: time="2025-07-12T10:24:11.548324331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 
10:24:11.549391 containerd[1560]: time="2025-07-12T10:24:11.549340182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.953143078s" Jul 12 10:24:11.549391 containerd[1560]: time="2025-07-12T10:24:11.549390697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 12 10:24:11.566170 containerd[1560]: time="2025-07-12T10:24:11.566116863Z" level=info msg="CreateContainer within sandbox \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 10:24:11.590177 containerd[1560]: time="2025-07-12T10:24:11.590098144Z" level=info msg="Container e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:11.601692 containerd[1560]: time="2025-07-12T10:24:11.601639830Z" level=info msg="CreateContainer within sandbox \"68fea6d228a5bcff6104d24f4d7fa24e6375014e47a45821d6e1987728cc81b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4\"" Jul 12 10:24:11.602833 containerd[1560]: time="2025-07-12T10:24:11.602795905Z" level=info msg="StartContainer for \"e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4\"" Jul 12 10:24:11.604590 containerd[1560]: time="2025-07-12T10:24:11.604467278Z" level=info msg="connecting to shim e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4" address="unix:///run/containerd/s/7df1cf4a1776dd37ee41a54db06d3ff729979c9a0c87faf5f4defe1f958757fe" protocol=ttrpc version=3 Jul 12 
10:24:11.612266 containerd[1560]: time="2025-07-12T10:24:11.612209098Z" level=error msg="Failed to destroy network for sandbox \"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:11.613798 containerd[1560]: time="2025-07-12T10:24:11.613707396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:11.614101 kubelet[2707]: E0712 10:24:11.614050 2707 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:24:11.614536 kubelet[2707]: E0712 10:24:11.614119 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:11.614536 kubelet[2707]: E0712 10:24:11.614143 2707 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" Jul 12 10:24:11.614536 kubelet[2707]: E0712 10:24:11.614194 2707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d68fd648b-nbrdq_calico-apiserver(153be907-6581-4138-b29b-e67e9e609b4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d68fd648b-nbrdq_calico-apiserver(153be907-6581-4138-b29b-e67e9e609b4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3978fb24df3601b34f2730b65c8cfb2120ae67174777aa8c131a3b5c6962b42e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" podUID="153be907-6581-4138-b29b-e67e9e609b4f" Jul 12 10:24:11.659126 systemd[1]: Started cri-containerd-e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4.scope - libcontainer container e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4. Jul 12 10:24:11.780773 containerd[1560]: time="2025-07-12T10:24:11.780695978Z" level=info msg="StartContainer for \"e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4\" returns successfully" Jul 12 10:24:11.802754 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 10:24:11.804226 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 12 10:24:12.358923 kubelet[2707]: I0712 10:24:12.358865 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-backend-key-pair\") pod \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " Jul 12 10:24:12.359155 kubelet[2707]: I0712 10:24:12.358969 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nktmh\" (UniqueName: \"kubernetes.io/projected/e48e02da-19e4-41ee-ac20-7f6f2fb189de-kube-api-access-nktmh\") pod \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " Jul 12 10:24:12.359155 kubelet[2707]: I0712 10:24:12.358993 2707 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-ca-bundle\") pod \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\" (UID: \"e48e02da-19e4-41ee-ac20-7f6f2fb189de\") " Jul 12 10:24:12.359481 kubelet[2707]: I0712 10:24:12.359437 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e48e02da-19e4-41ee-ac20-7f6f2fb189de" (UID: "e48e02da-19e4-41ee-ac20-7f6f2fb189de"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 10:24:12.363828 kubelet[2707]: I0712 10:24:12.363744 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48e02da-19e4-41ee-ac20-7f6f2fb189de-kube-api-access-nktmh" (OuterVolumeSpecName: "kube-api-access-nktmh") pod "e48e02da-19e4-41ee-ac20-7f6f2fb189de" (UID: "e48e02da-19e4-41ee-ac20-7f6f2fb189de"). InnerVolumeSpecName "kube-api-access-nktmh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 10:24:12.363828 kubelet[2707]: I0712 10:24:12.363735 2707 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e48e02da-19e4-41ee-ac20-7f6f2fb189de" (UID: "e48e02da-19e4-41ee-ac20-7f6f2fb189de"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 10:24:12.459803 kubelet[2707]: I0712 10:24:12.459744 2707 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nktmh\" (UniqueName: \"kubernetes.io/projected/e48e02da-19e4-41ee-ac20-7f6f2fb189de-kube-api-access-nktmh\") on node \"localhost\" DevicePath \"\"" Jul 12 10:24:12.459803 kubelet[2707]: I0712 10:24:12.459787 2707 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 10:24:12.459803 kubelet[2707]: I0712 10:24:12.459795 2707 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e48e02da-19e4-41ee-ac20-7f6f2fb189de-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 10:24:12.503471 containerd[1560]: time="2025-07-12T10:24:12.503425788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdh9p,Uid:d7530ec4-ef51-4337-ab9f-6e8f00c29a8e,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:12.503859 containerd[1560]: time="2025-07-12T10:24:12.503656031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-shqdk,Uid:cb03ae40-baa6-4128-a2f4-201bd683dff9,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:24:12.513103 systemd[1]: Removed slice kubepods-besteffort-pode48e02da_19e4_41ee_ac20_7f6f2fb189de.slice - libcontainer 
container kubepods-besteffort-pode48e02da_19e4_41ee_ac20_7f6f2fb189de.slice. Jul 12 10:24:12.544271 systemd[1]: run-netns-cni\x2d92281138\x2d01ef\x2d315f\x2df3fb\x2de41afa9fe078.mount: Deactivated successfully. Jul 12 10:24:12.544367 systemd[1]: var-lib-kubelet-pods-e48e02da\x2d19e4\x2d41ee\x2dac20\x2d7f6f2fb189de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnktmh.mount: Deactivated successfully. Jul 12 10:24:12.544445 systemd[1]: var-lib-kubelet-pods-e48e02da\x2d19e4\x2d41ee\x2dac20\x2d7f6f2fb189de-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 10:24:12.827585 kubelet[2707]: I0712 10:24:12.827495 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pwlkk" podStartSLOduration=2.030770406 podStartE2EDuration="23.827448497s" podCreationTimestamp="2025-07-12 10:23:49 +0000 UTC" firstStartedPulling="2025-07-12 10:23:49.753889388 +0000 UTC m=+17.329874763" lastFinishedPulling="2025-07-12 10:24:11.550567489 +0000 UTC m=+39.126552854" observedRunningTime="2025-07-12 10:24:12.825252097 +0000 UTC m=+40.401237492" watchObservedRunningTime="2025-07-12 10:24:12.827448497 +0000 UTC m=+40.403433872" Jul 12 10:24:12.850441 systemd[1]: Created slice kubepods-besteffort-pod87934f64_9b26_414a_a389_c171201e8a5c.slice - libcontainer container kubepods-besteffort-pod87934f64_9b26_414a_a389_c171201e8a5c.slice. 
Jul 12 10:24:12.863149 kubelet[2707]: I0712 10:24:12.863097 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/87934f64-9b26-414a-a389-c171201e8a5c-whisker-backend-key-pair\") pod \"whisker-cf5b9c586-9zbh9\" (UID: \"87934f64-9b26-414a-a389-c171201e8a5c\") " pod="calico-system/whisker-cf5b9c586-9zbh9" Jul 12 10:24:12.863390 kubelet[2707]: I0712 10:24:12.863373 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87934f64-9b26-414a-a389-c171201e8a5c-whisker-ca-bundle\") pod \"whisker-cf5b9c586-9zbh9\" (UID: \"87934f64-9b26-414a-a389-c171201e8a5c\") " pod="calico-system/whisker-cf5b9c586-9zbh9" Jul 12 10:24:12.863532 kubelet[2707]: I0712 10:24:12.863513 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n87lm\" (UniqueName: \"kubernetes.io/projected/87934f64-9b26-414a-a389-c171201e8a5c-kube-api-access-n87lm\") pod \"whisker-cf5b9c586-9zbh9\" (UID: \"87934f64-9b26-414a-a389-c171201e8a5c\") " pod="calico-system/whisker-cf5b9c586-9zbh9" Jul 12 10:24:12.885164 systemd-networkd[1479]: calif172f1803df: Link UP Jul 12 10:24:12.885805 systemd-networkd[1479]: calif172f1803df: Gained carrier Jul 12 10:24:12.900077 containerd[1560]: 2025-07-12 10:24:12.655 [INFO][3928] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:24:12.900077 containerd[1560]: 2025-07-12 10:24:12.686 [INFO][3928] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0 goldmane-768f4c5c69- calico-system d7530ec4-ef51-4337-ab9f-6e8f00c29a8e 857 0 2025-07-12 10:23:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-cdh9p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif172f1803df [] [] }} ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-" Jul 12 10:24:12.900077 containerd[1560]: 2025-07-12 10:24:12.687 [INFO][3928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.900077 containerd[1560]: 2025-07-12 10:24:12.771 [INFO][3954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" HandleID="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Workload="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.773 [INFO][3954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" HandleID="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Workload="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b94d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-cdh9p", "timestamp":"2025-07-12 10:24:12.771372062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:12.900473 containerd[1560]: 
2025-07-12 10:24:12.773 [INFO][3954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.773 [INFO][3954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.774 [INFO][3954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.829 [INFO][3954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" host="localhost" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.844 [INFO][3954] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.856 [INFO][3954] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.859 [INFO][3954] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.861 [INFO][3954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:12.900473 containerd[1560]: 2025-07-12 10:24:12.861 [INFO][3954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" host="localhost" Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.862 [INFO][3954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663 Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.866 [INFO][3954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" 
host="localhost" Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.871 [INFO][3954] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" host="localhost" Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.871 [INFO][3954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" host="localhost" Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.871 [INFO][3954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:12.900781 containerd[1560]: 2025-07-12 10:24:12.871 [INFO][3954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" HandleID="k8s-pod-network.7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Workload="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.900957 containerd[1560]: 2025-07-12 10:24:12.876 [INFO][3928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-cdh9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif172f1803df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:12.900957 containerd[1560]: 2025-07-12 10:24:12.876 [INFO][3928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.901070 containerd[1560]: 2025-07-12 10:24:12.876 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif172f1803df ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.901070 containerd[1560]: 2025-07-12 10:24:12.885 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:12.901144 containerd[1560]: 2025-07-12 10:24:12.886 [INFO][3928] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"d7530ec4-ef51-4337-ab9f-6e8f00c29a8e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663", Pod:"goldmane-768f4c5c69-cdh9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif172f1803df", MAC:"f6:d6:6f:98:90:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:12.901219 containerd[1560]: 2025-07-12 10:24:12.895 [INFO][3928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdh9p" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdh9p-eth0" Jul 12 10:24:13.457740 containerd[1560]: time="2025-07-12T10:24:13.457677401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf5b9c586-9zbh9,Uid:87934f64-9b26-414a-a389-c171201e8a5c,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:13.503060 containerd[1560]: time="2025-07-12T10:24:13.503016935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvff8,Uid:4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:13.788642 systemd-networkd[1479]: calid32d9a41fde: Link UP Jul 12 10:24:13.789812 systemd-networkd[1479]: calid32d9a41fde: Gained carrier Jul 12 10:24:13.834857 containerd[1560]: 2025-07-12 10:24:12.695 [INFO][3940] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:24:13.834857 containerd[1560]: 2025-07-12 10:24:12.830 [INFO][3940] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0 calico-apiserver-6d68fd648b- calico-apiserver cb03ae40-baa6-4128-a2f4-201bd683dff9 859 0 2025-07-12 10:23:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d68fd648b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d68fd648b-shqdk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid32d9a41fde [] [] }} ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-" Jul 12 10:24:13.834857 containerd[1560]: 2025-07-12 10:24:12.830 [INFO][3940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.834857 containerd[1560]: 2025-07-12 10:24:12.877 [INFO][3962] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" HandleID="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Workload="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:12.877 [INFO][3962] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" HandleID="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Workload="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d68fd648b-shqdk", "timestamp":"2025-07-12 10:24:12.877288826 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:12.877 [INFO][3962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:12.877 [INFO][3962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:12.877 [INFO][3962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.178 [INFO][3962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" host="localhost" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.623 [INFO][3962] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.626 [INFO][3962] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.628 [INFO][3962] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.630 [INFO][3962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:13.835424 containerd[1560]: 2025-07-12 10:24:13.630 [INFO][3962] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" host="localhost" Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.631 [INFO][3962] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35 Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.713 [INFO][3962] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" host="localhost" Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.765 [INFO][3962] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" host="localhost" Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.765 [INFO][3962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" host="localhost" Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.765 [INFO][3962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:13.835651 containerd[1560]: 2025-07-12 10:24:13.765 [INFO][3962] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" HandleID="k8s-pod-network.6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Workload="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.835794 containerd[1560]: 2025-07-12 10:24:13.773 [INFO][3940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0", GenerateName:"calico-apiserver-6d68fd648b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb03ae40-baa6-4128-a2f4-201bd683dff9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d68fd648b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d68fd648b-shqdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid32d9a41fde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:13.835847 containerd[1560]: 2025-07-12 10:24:13.777 [INFO][3940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.835847 containerd[1560]: 2025-07-12 10:24:13.777 [INFO][3940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid32d9a41fde ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.835847 containerd[1560]: 2025-07-12 10:24:13.790 [INFO][3940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.835912 containerd[1560]: 2025-07-12 10:24:13.793 [INFO][3940] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0", GenerateName:"calico-apiserver-6d68fd648b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cb03ae40-baa6-4128-a2f4-201bd683dff9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d68fd648b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35", Pod:"calico-apiserver-6d68fd648b-shqdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid32d9a41fde", MAC:"6a:fd:bc:71:9f:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:13.835965 containerd[1560]: 2025-07-12 10:24:13.831 [INFO][3940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-shqdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--shqdk-eth0" Jul 12 10:24:13.898297 containerd[1560]: time="2025-07-12T10:24:13.898225776Z" level=info msg="connecting to shim 6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35" address="unix:///run/containerd/s/6a5dea46992ba00f42db18c9e93fb8a85387453de07cb9925fb0affc40fa499e" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:13.899489 containerd[1560]: time="2025-07-12T10:24:13.899454156Z" level=info msg="connecting to shim 7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663" address="unix:///run/containerd/s/011de2da762df6f8663ca4f8433a91ee823a2ce62c6fee7e6834e0aff2edd8c8" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:13.990916 systemd[1]: Started cri-containerd-7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663.scope - libcontainer container 7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663. Jul 12 10:24:14.002437 systemd[1]: Started cri-containerd-6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35.scope - libcontainer container 6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35. 
Jul 12 10:24:14.018477 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:14.021532 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:14.161636 containerd[1560]: time="2025-07-12T10:24:14.161456822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-shqdk,Uid:cb03ae40-baa6-4128-a2f4-201bd683dff9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35\"" Jul 12 10:24:14.164213 containerd[1560]: time="2025-07-12T10:24:14.164076076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 10:24:14.200751 containerd[1560]: time="2025-07-12T10:24:14.199806294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdh9p,Uid:d7530ec4-ef51-4337-ab9f-6e8f00c29a8e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663\"" Jul 12 10:24:14.222859 systemd-networkd[1479]: vxlan.calico: Link UP Jul 12 10:24:14.222867 systemd-networkd[1479]: vxlan.calico: Gained carrier Jul 12 10:24:14.307411 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:38820.service - OpenSSH per-connection server daemon (10.0.0.1:38820). Jul 12 10:24:14.378472 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 38820 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:14.380376 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:14.385639 systemd-logind[1540]: New session 8 of user core. Jul 12 10:24:14.392125 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 12 10:24:14.400122 systemd-networkd[1479]: cali09cc398cab5: Link UP Jul 12 10:24:14.403875 systemd-networkd[1479]: cali09cc398cab5: Gained carrier Jul 12 10:24:14.425421 containerd[1560]: 2025-07-12 10:24:13.858 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:24:14.425421 containerd[1560]: 2025-07-12 10:24:13.909 [INFO][4077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cf5b9c586--9zbh9-eth0 whisker-cf5b9c586- calico-system 87934f64-9b26-414a-a389-c171201e8a5c 938 0 2025-07-12 10:24:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cf5b9c586 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cf5b9c586-9zbh9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09cc398cab5 [] [] }} ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-" Jul 12 10:24:14.425421 containerd[1560]: 2025-07-12 10:24:13.910 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.425421 containerd[1560]: 2025-07-12 10:24:13.994 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" HandleID="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Workload="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:13.995 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" HandleID="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Workload="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de060), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cf5b9c586-9zbh9", "timestamp":"2025-07-12 10:24:13.994321768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:13.995 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:13.995 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:13.995 [INFO][4164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.253 [INFO][4164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" host="localhost" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.294 [INFO][4164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.298 [INFO][4164] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.364 [INFO][4164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.367 [INFO][4164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 12 10:24:14.425705 containerd[1560]: 2025-07-12 10:24:14.367 [INFO][4164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" host="localhost" Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.368 [INFO][4164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.376 [INFO][4164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" host="localhost" Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.387 [INFO][4164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" host="localhost" Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.387 [INFO][4164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" host="localhost" Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.387 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:24:14.426128 containerd[1560]: 2025-07-12 10:24:14.387 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" HandleID="k8s-pod-network.c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Workload="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.426350 containerd[1560]: 2025-07-12 10:24:14.392 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cf5b9c586--9zbh9-eth0", GenerateName:"whisker-cf5b9c586-", Namespace:"calico-system", SelfLink:"", UID:"87934f64-9b26-414a-a389-c171201e8a5c", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf5b9c586", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cf5b9c586-9zbh9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09cc398cab5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:14.426350 containerd[1560]: 2025-07-12 10:24:14.393 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.426462 containerd[1560]: 2025-07-12 10:24:14.393 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09cc398cab5 ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.426462 containerd[1560]: 2025-07-12 10:24:14.400 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.426532 containerd[1560]: 2025-07-12 10:24:14.400 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cf5b9c586--9zbh9-eth0", GenerateName:"whisker-cf5b9c586-", Namespace:"calico-system", SelfLink:"", UID:"87934f64-9b26-414a-a389-c171201e8a5c", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 24, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf5b9c586", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a", Pod:"whisker-cf5b9c586-9zbh9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09cc398cab5", MAC:"ba:14:b3:2a:0d:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:14.426600 containerd[1560]: 2025-07-12 10:24:14.421 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" Namespace="calico-system" Pod="whisker-cf5b9c586-9zbh9" WorkloadEndpoint="localhost-k8s-whisker--cf5b9c586--9zbh9-eth0" Jul 12 10:24:14.498316 systemd-networkd[1479]: cali99048db36c2: Link UP Jul 12 10:24:14.499682 systemd-networkd[1479]: cali99048db36c2: Gained carrier Jul 12 10:24:14.504937 kubelet[2707]: E0712 10:24:14.504907 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:14.509052 containerd[1560]: time="2025-07-12T10:24:14.508883489Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-778bc96f59-rcg6j,Uid:0a81c59f-5f69-4f57-a191-e15066abbd4b,Namespace:calico-system,Attempt:0,}" Jul 12 10:24:14.510111 containerd[1560]: time="2025-07-12T10:24:14.510079547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5hcl7,Uid:9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e,Namespace:kube-system,Attempt:0,}" Jul 12 10:24:14.512559 kubelet[2707]: I0712 10:24:14.512406 2707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48e02da-19e4-41ee-ac20-7f6f2fb189de" path="/var/lib/kubelet/pods/e48e02da-19e4-41ee-ac20-7f6f2fb189de/volumes" Jul 12 10:24:14.545351 containerd[1560]: 2025-07-12 10:24:13.846 [INFO][4088] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:24:14.545351 containerd[1560]: 2025-07-12 10:24:13.913 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mvff8-eth0 csi-node-driver- calico-system 4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27 745 0 2025-07-12 10:23:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mvff8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99048db36c2 [] [] }} ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-" Jul 12 10:24:14.545351 containerd[1560]: 2025-07-12 10:24:13.913 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.545351 containerd[1560]: 2025-07-12 10:24:14.012 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" HandleID="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Workload="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.014 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" HandleID="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Workload="localhost-k8s-csi--node--driver--mvff8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mvff8", "timestamp":"2025-07-12 10:24:14.009969295 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.015 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.388 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.388 [INFO][4166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.405 [INFO][4166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" host="localhost" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.421 [INFO][4166] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.427 [INFO][4166] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.429 [INFO][4166] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.431 [INFO][4166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:14.545700 containerd[1560]: 2025-07-12 10:24:14.431 [INFO][4166] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" host="localhost" Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.432 [INFO][4166] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.454 [INFO][4166] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" host="localhost" Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.488 [INFO][4166] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" host="localhost" Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.488 [INFO][4166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" host="localhost" Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.488 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:14.546241 containerd[1560]: 2025-07-12 10:24:14.488 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" HandleID="k8s-pod-network.97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Workload="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.546489 containerd[1560]: 2025-07-12 10:24:14.493 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvff8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mvff8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99048db36c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:14.546568 containerd[1560]: 2025-07-12 10:24:14.493 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.546568 containerd[1560]: 2025-07-12 10:24:14.493 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99048db36c2 ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.546568 containerd[1560]: 2025-07-12 10:24:14.500 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.546660 containerd[1560]: 2025-07-12 10:24:14.501 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" 
Namespace="calico-system" Pod="csi-node-driver-mvff8" WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mvff8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c", Pod:"csi-node-driver-mvff8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99048db36c2", MAC:"ba:f3:8d:97:6b:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:14.547451 containerd[1560]: 2025-07-12 10:24:14.540 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" Namespace="calico-system" Pod="csi-node-driver-mvff8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mvff8-eth0" Jul 12 10:24:14.686879 systemd-networkd[1479]: calif172f1803df: Gained IPv6LL Jul 12 10:24:14.744625 sshd[4296]: Connection closed by 10.0.0.1 port 38820 Jul 12 10:24:14.745037 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Jul 12 10:24:14.749906 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:38820.service: Deactivated successfully. Jul 12 10:24:14.752133 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 10:24:14.753232 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Jul 12 10:24:14.754658 systemd-logind[1540]: Removed session 8. Jul 12 10:24:15.071014 systemd-networkd[1479]: calid32d9a41fde: Gained IPv6LL Jul 12 10:24:15.503448 kubelet[2707]: E0712 10:24:15.503409 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:15.504072 containerd[1560]: time="2025-07-12T10:24:15.503941548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s729n,Uid:a2850aa5-dd6d-4817-b6b8-f8a76320c95c,Namespace:kube-system,Attempt:0,}" Jul 12 10:24:15.582945 systemd-networkd[1479]: cali09cc398cab5: Gained IPv6LL Jul 12 10:24:15.647675 containerd[1560]: time="2025-07-12T10:24:15.647562092Z" level=info msg="connecting to shim c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a" address="unix:///run/containerd/s/08f7d01f43a41d7a1ddd7c9b3a2306dee62da8b5fd18390957fa96ff607b95a3" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:15.657306 containerd[1560]: time="2025-07-12T10:24:15.657244401Z" level=info msg="connecting to shim 97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c" address="unix:///run/containerd/s/89f0528e9a5f0ed84ca6acc69db9b5630c655f7f039b35bc7af350c5961eb5fa" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:15.657764 systemd-networkd[1479]: cali11500488eb2: 
Link UP Jul 12 10:24:15.658562 systemd-networkd[1479]: cali11500488eb2: Gained carrier Jul 12 10:24:15.690968 systemd[1]: Started cri-containerd-c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a.scope - libcontainer container c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a. Jul 12 10:24:15.694309 systemd[1]: Started cri-containerd-97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c.scope - libcontainer container 97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c. Jul 12 10:24:15.704831 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:15.707114 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:15.724895 containerd[1560]: 2025-07-12 10:24:15.217 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0 coredns-668d6bf9bc- kube-system 9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e 846 0 2025-07-12 10:23:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5hcl7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11500488eb2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-" Jul 12 10:24:15.724895 containerd[1560]: 2025-07-12 10:24:15.217 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.724895 containerd[1560]: 2025-07-12 10:24:15.246 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" HandleID="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Workload="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.246 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" HandleID="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Workload="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5hcl7", "timestamp":"2025-07-12 10:24:15.246222742 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.246 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.246 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.246 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.544 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" host="localhost" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.609 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.614 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.616 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.618 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:15.725140 containerd[1560]: 2025-07-12 10:24:15.618 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" host="localhost" Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.620 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.625 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" host="localhost" Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.632 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" host="localhost" Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.632 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" host="localhost" Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.632 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:15.725342 containerd[1560]: 2025-07-12 10:24:15.632 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" HandleID="k8s-pod-network.b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Workload="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.725464 containerd[1560]: 2025-07-12 10:24:15.649 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5hcl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11500488eb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:15.725538 containerd[1560]: 2025-07-12 10:24:15.650 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.725538 containerd[1560]: 2025-07-12 10:24:15.650 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11500488eb2 ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.725538 containerd[1560]: 2025-07-12 10:24:15.659 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.725605 containerd[1560]: 2025-07-12 10:24:15.659 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e", Pod:"coredns-668d6bf9bc-5hcl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11500488eb2", MAC:"8a:b7:12:2b:25:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:15.725605 containerd[1560]: 2025-07-12 10:24:15.719 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-5hcl7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5hcl7-eth0" Jul 12 10:24:15.846872 containerd[1560]: time="2025-07-12T10:24:15.846693105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mvff8,Uid:4e0c1f65-4f12-4625-8a4a-ef0ef07f6a27,Namespace:calico-system,Attempt:0,} returns sandbox id \"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c\"" Jul 12 10:24:15.948764 systemd-networkd[1479]: cali3627e096c6b: Link UP Jul 12 10:24:15.950131 systemd-networkd[1479]: cali3627e096c6b: Gained carrier Jul 12 10:24:15.999870 containerd[1560]: time="2025-07-12T10:24:15.999833563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf5b9c586-9zbh9,Uid:87934f64-9b26-414a-a389-c171201e8a5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a\"" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.607 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0 calico-kube-controllers-778bc96f59- calico-system 0a81c59f-5f69-4f57-a191-e15066abbd4b 856 0 2025-07-12 10:23:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:778bc96f59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-778bc96f59-rcg6j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3627e096c6b [] [] }} ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.607 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.645 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" HandleID="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Workload="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.645 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" HandleID="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Workload="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7030), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-778bc96f59-rcg6j", "timestamp":"2025-07-12 10:24:15.645286695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.647 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.647 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.647 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.715 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.726 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.734 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.736 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.739 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.739 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.741 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3 Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.778 [INFO][4402] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" host="localhost" Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:16.000759 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" HandleID="k8s-pod-network.e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Workload="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.946 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0", GenerateName:"calico-kube-controllers-778bc96f59-", Namespace:"calico-system", SelfLink:"", UID:"0a81c59f-5f69-4f57-a191-e15066abbd4b", ResourceVersion:"856", 
Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"778bc96f59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-778bc96f59-rcg6j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3627e096c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.946 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.946 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3627e096c6b ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.950 
[INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.952 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0", GenerateName:"calico-kube-controllers-778bc96f59-", Namespace:"calico-system", SelfLink:"", UID:"0a81c59f-5f69-4f57-a191-e15066abbd4b", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"778bc96f59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3", Pod:"calico-kube-controllers-778bc96f59-rcg6j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3627e096c6b", MAC:"ce:fb:a7:e9:99:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:16.001618 containerd[1560]: 2025-07-12 10:24:15.994 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" Namespace="calico-system" Pod="calico-kube-controllers-778bc96f59-rcg6j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778bc96f59--rcg6j-eth0" Jul 12 10:24:16.026732 systemd-networkd[1479]: calid136e698914: Link UP Jul 12 10:24:16.027884 systemd-networkd[1479]: calid136e698914: Gained carrier Jul 12 10:24:16.044930 containerd[1560]: time="2025-07-12T10:24:16.044548841Z" level=info msg="connecting to shim b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e" address="unix:///run/containerd/s/b220dd5f05102c41b5a9145a79409bc416b487a342c8baa2c3dca6e88353c43e" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:16.054610 containerd[1560]: time="2025-07-12T10:24:16.054566588Z" level=info msg="connecting to shim e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3" address="unix:///run/containerd/s/c7a936ed7d1f5802ec0cf0be9e0b024d283d5d0a751581348ba86a97f44dfc44" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.721 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s729n-eth0 coredns-668d6bf9bc- kube-system a2850aa5-dd6d-4817-b6b8-f8a76320c95c 860 0 2025-07-12 10:23:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s localhost coredns-668d6bf9bc-s729n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid136e698914 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.721 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.753 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" HandleID="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Workload="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.754 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" HandleID="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Workload="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s729n", "timestamp":"2025-07-12 10:24:15.753819707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.754 [INFO][4515] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.943 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.951 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.956 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.995 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:15.999 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.003 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.003 [INFO][4515] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.007 [INFO][4515] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.010 [INFO][4515] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 
2025-07-12 10:24:16.018 [INFO][4515] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.018 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" host="localhost" Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.018 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:16.055104 containerd[1560]: 2025-07-12 10:24:16.018 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" HandleID="k8s-pod-network.2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Workload="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.021 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s729n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a2850aa5-dd6d-4817-b6b8-f8a76320c95c", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s729n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid136e698914", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.022 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.022 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid136e698914 ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.028 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.028 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s729n-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a2850aa5-dd6d-4817-b6b8-f8a76320c95c", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba", Pod:"coredns-668d6bf9bc-s729n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid136e698914", MAC:"fa:c2:ea:b3:6c:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:16.055615 containerd[1560]: 2025-07-12 10:24:16.043 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-s729n" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s729n-eth0" Jul 12 10:24:16.085522 containerd[1560]: time="2025-07-12T10:24:16.085469969Z" level=info msg="connecting to shim 2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba" address="unix:///run/containerd/s/4413111b940ae9f428fd13bb4e3580264c43bb48f830250ab6bb7c500de308b0" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:16.087905 systemd[1]: Started cri-containerd-b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e.scope - libcontainer container b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e. Jul 12 10:24:16.091784 systemd[1]: Started cri-containerd-e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3.scope - libcontainer container e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3. Jul 12 10:24:16.094907 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Jul 12 10:24:16.104694 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:16.116874 systemd[1]: Started cri-containerd-2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba.scope - libcontainer container 2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba. 
Jul 12 10:24:16.123363 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:16.136340 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:16.142734 containerd[1560]: time="2025-07-12T10:24:16.142675992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5hcl7,Uid:9c226913-f0b1-4c3a-8e9d-41c8ddc6d70e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e\"" Jul 12 10:24:16.144093 kubelet[2707]: E0712 10:24:16.144060 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:16.147637 containerd[1560]: time="2025-07-12T10:24:16.147603152Z" level=info msg="CreateContainer within sandbox \"b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 10:24:16.168500 containerd[1560]: time="2025-07-12T10:24:16.168386072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778bc96f59-rcg6j,Uid:0a81c59f-5f69-4f57-a191-e15066abbd4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3\"" Jul 12 10:24:16.178832 containerd[1560]: time="2025-07-12T10:24:16.178778925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s729n,Uid:a2850aa5-dd6d-4817-b6b8-f8a76320c95c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba\"" Jul 12 10:24:16.179512 kubelet[2707]: E0712 10:24:16.179487 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 12 10:24:16.180971 containerd[1560]: time="2025-07-12T10:24:16.180937041Z" level=info msg="CreateContainer within sandbox \"2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 10:24:16.414912 systemd-networkd[1479]: cali99048db36c2: Gained IPv6LL Jul 12 10:24:16.760629 containerd[1560]: time="2025-07-12T10:24:16.760563125Z" level=info msg="Container 96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:16.763974 containerd[1560]: time="2025-07-12T10:24:16.763929643Z" level=info msg="Container 2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:16.770858 containerd[1560]: time="2025-07-12T10:24:16.770822637Z" level=info msg="CreateContainer within sandbox \"b1db839ddb5cbdb0e7984a5a4825998422de8c98de48ebb8674a0160ca417a7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa\"" Jul 12 10:24:16.771471 containerd[1560]: time="2025-07-12T10:24:16.771319681Z" level=info msg="StartContainer for \"96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa\"" Jul 12 10:24:16.772204 containerd[1560]: time="2025-07-12T10:24:16.772174898Z" level=info msg="connecting to shim 96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa" address="unix:///run/containerd/s/b220dd5f05102c41b5a9145a79409bc416b487a342c8baa2c3dca6e88353c43e" protocol=ttrpc version=3 Jul 12 10:24:16.783846 containerd[1560]: time="2025-07-12T10:24:16.783800648Z" level=info msg="CreateContainer within sandbox \"2f5073577efebd2d340b1fe757d73847f0cd951ec38dc109a24f88743fc539ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d\"" Jul 12 10:24:16.784861 containerd[1560]: 
time="2025-07-12T10:24:16.784834441Z" level=info msg="StartContainer for \"2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d\"" Jul 12 10:24:16.787106 containerd[1560]: time="2025-07-12T10:24:16.787060895Z" level=info msg="connecting to shim 2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d" address="unix:///run/containerd/s/4413111b940ae9f428fd13bb4e3580264c43bb48f830250ab6bb7c500de308b0" protocol=ttrpc version=3 Jul 12 10:24:16.796946 systemd[1]: Started cri-containerd-96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa.scope - libcontainer container 96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa. Jul 12 10:24:16.818842 systemd[1]: Started cri-containerd-2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d.scope - libcontainer container 2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d. Jul 12 10:24:16.843256 containerd[1560]: time="2025-07-12T10:24:16.843148588Z" level=info msg="StartContainer for \"96647a935c96d2bda3b6c8e211df4cc20b6a978e3383f199a80f7891adc59ffa\" returns successfully" Jul 12 10:24:16.850269 containerd[1560]: time="2025-07-12T10:24:16.850230066Z" level=info msg="StartContainer for \"2a166d460a8b75620c3f0310da84051d5f6389564fc51bb0d5ddbd22bd11a44d\" returns successfully" Jul 12 10:24:17.246912 systemd-networkd[1479]: cali3627e096c6b: Gained IPv6LL Jul 12 10:24:17.631050 systemd-networkd[1479]: cali11500488eb2: Gained IPv6LL Jul 12 10:24:17.644081 kubelet[2707]: E0712 10:24:17.644058 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:17.648466 kubelet[2707]: E0712 10:24:17.647985 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:17.677015 kubelet[2707]: I0712 10:24:17.676932 2707 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s729n" podStartSLOduration=38.676910547 podStartE2EDuration="38.676910547s" podCreationTimestamp="2025-07-12 10:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:24:17.659251862 +0000 UTC m=+45.235237237" watchObservedRunningTime="2025-07-12 10:24:17.676910547 +0000 UTC m=+45.252895932" Jul 12 10:24:17.691858 kubelet[2707]: I0712 10:24:17.691764 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5hcl7" podStartSLOduration=38.691699107 podStartE2EDuration="38.691699107s" podCreationTimestamp="2025-07-12 10:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:24:17.690939539 +0000 UTC m=+45.266924914" watchObservedRunningTime="2025-07-12 10:24:17.691699107 +0000 UTC m=+45.267684502" Jul 12 10:24:17.796902 containerd[1560]: time="2025-07-12T10:24:17.796852281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:17.797635 containerd[1560]: time="2025-07-12T10:24:17.797599405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 12 10:24:17.798705 containerd[1560]: time="2025-07-12T10:24:17.798649307Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:17.800687 containerd[1560]: time="2025-07-12T10:24:17.800646951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:17.801236 containerd[1560]: time="2025-07-12T10:24:17.801192927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.637062649s" Jul 12 10:24:17.801277 containerd[1560]: time="2025-07-12T10:24:17.801241418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 10:24:17.802960 containerd[1560]: time="2025-07-12T10:24:17.802934711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 10:24:17.803876 containerd[1560]: time="2025-07-12T10:24:17.803841514Z" level=info msg="CreateContainer within sandbox \"6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:24:17.822205 containerd[1560]: time="2025-07-12T10:24:17.822159578Z" level=info msg="Container 38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:17.829366 containerd[1560]: time="2025-07-12T10:24:17.829322478Z" level=info msg="CreateContainer within sandbox \"6d09d81c314ba554554de2e8785a5eb377fa550ac7a24ddfdb134b0024a24d35\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0\"" Jul 12 10:24:17.829973 containerd[1560]: time="2025-07-12T10:24:17.829949557Z" level=info msg="StartContainer for \"38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0\"" Jul 12 10:24:17.830894 containerd[1560]: 
time="2025-07-12T10:24:17.830868233Z" level=info msg="connecting to shim 38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0" address="unix:///run/containerd/s/6a5dea46992ba00f42db18c9e93fb8a85387453de07cb9925fb0affc40fa499e" protocol=ttrpc version=3 Jul 12 10:24:17.857855 systemd[1]: Started cri-containerd-38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0.scope - libcontainer container 38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0. Jul 12 10:24:17.905535 containerd[1560]: time="2025-07-12T10:24:17.905420132Z" level=info msg="StartContainer for \"38e981e8a17b291737e2f9e7f70528a1a2904a0ed637dddc85fd02c7802fd2e0\" returns successfully" Jul 12 10:24:18.015459 systemd-networkd[1479]: calid136e698914: Gained IPv6LL Jul 12 10:24:18.652376 kubelet[2707]: E0712 10:24:18.652218 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:18.658812 kubelet[2707]: E0712 10:24:18.658165 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:18.664706 kubelet[2707]: I0712 10:24:18.664641 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d68fd648b-shqdk" podStartSLOduration=29.02590249 podStartE2EDuration="32.6646253s" podCreationTimestamp="2025-07-12 10:23:46 +0000 UTC" firstStartedPulling="2025-07-12 10:24:14.163564555 +0000 UTC m=+41.739549930" lastFinishedPulling="2025-07-12 10:24:17.802287365 +0000 UTC m=+45.378272740" observedRunningTime="2025-07-12 10:24:18.664210221 +0000 UTC m=+46.240195596" watchObservedRunningTime="2025-07-12 10:24:18.6646253 +0000 UTC m=+46.240610675" Jul 12 10:24:19.669746 kubelet[2707]: E0712 10:24:19.669412 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:19.669746 kubelet[2707]: E0712 10:24:19.669673 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 10:24:19.764880 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:36944.service - OpenSSH per-connection server daemon (10.0.0.1:36944). Jul 12 10:24:19.943011 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 36944 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:19.945168 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:19.950535 systemd-logind[1540]: New session 9 of user core. Jul 12 10:24:19.965841 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 10:24:20.121293 sshd[4825]: Connection closed by 10.0.0.1 port 36944 Jul 12 10:24:20.123506 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 12 10:24:20.128002 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Jul 12 10:24:20.128349 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:36944.service: Deactivated successfully. Jul 12 10:24:20.130781 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 10:24:20.133477 systemd-logind[1540]: Removed session 9. Jul 12 10:24:20.317473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420829157.mount: Deactivated successfully. 
Jul 12 10:24:20.846570 containerd[1560]: time="2025-07-12T10:24:20.846086284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:20.862949 containerd[1560]: time="2025-07-12T10:24:20.846905813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 12 10:24:20.862949 containerd[1560]: time="2025-07-12T10:24:20.848547187Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:20.862949 containerd[1560]: time="2025-07-12T10:24:20.851770913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.048807838s" Jul 12 10:24:20.862949 containerd[1560]: time="2025-07-12T10:24:20.862784133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 12 10:24:20.864023 containerd[1560]: time="2025-07-12T10:24:20.863687179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:20.865581 containerd[1560]: time="2025-07-12T10:24:20.865499685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 10:24:20.867343 containerd[1560]: time="2025-07-12T10:24:20.867309194Z" level=info msg="CreateContainer within sandbox \"7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 10:24:20.877138 containerd[1560]: time="2025-07-12T10:24:20.877097463Z" level=info msg="Container 3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:20.889590 containerd[1560]: time="2025-07-12T10:24:20.889530991Z" level=info msg="CreateContainer within sandbox \"7e24b77166577905f4f9868c954eb5a2a1f5ab0e05081d2bf87e3cffaa737663\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\"" Jul 12 10:24:20.890317 containerd[1560]: time="2025-07-12T10:24:20.890265080Z" level=info msg="StartContainer for \"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\"" Jul 12 10:24:20.891574 containerd[1560]: time="2025-07-12T10:24:20.891545956Z" level=info msg="connecting to shim 3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50" address="unix:///run/containerd/s/011de2da762df6f8663ca4f8433a91ee823a2ce62c6fee7e6834e0aff2edd8c8" protocol=ttrpc version=3 Jul 12 10:24:20.922889 systemd[1]: Started cri-containerd-3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50.scope - libcontainer container 3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50. 
Jul 12 10:24:20.976070 containerd[1560]: time="2025-07-12T10:24:20.976029190Z" level=info msg="StartContainer for \"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\" returns successfully" Jul 12 10:24:21.746777 kubelet[2707]: I0712 10:24:21.746572 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-cdh9p" podStartSLOduration=27.083022491 podStartE2EDuration="33.746431758s" podCreationTimestamp="2025-07-12 10:23:48 +0000 UTC" firstStartedPulling="2025-07-12 10:24:14.20175085 +0000 UTC m=+41.777736225" lastFinishedPulling="2025-07-12 10:24:20.865160117 +0000 UTC m=+48.441145492" observedRunningTime="2025-07-12 10:24:21.745606718 +0000 UTC m=+49.321592093" watchObservedRunningTime="2025-07-12 10:24:21.746431758 +0000 UTC m=+49.322417133" Jul 12 10:24:21.886941 containerd[1560]: time="2025-07-12T10:24:21.886877964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\" id:\"86dbd446f533ed8b636854f4dfc3002c858c14350d840c3864fe49ebd46b96b1\" pid:4896 exit_status:1 exited_at:{seconds:1752315861 nanos:886430694}" Jul 12 10:24:22.759275 containerd[1560]: time="2025-07-12T10:24:22.759205208Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\" id:\"4b8105b7b045ad21571ad96a274ddb13e230ce7aabff00d7157231d3075ae9e9\" pid:4920 exit_status:1 exited_at:{seconds:1752315862 nanos:758843969}" Jul 12 10:24:23.178391 containerd[1560]: time="2025-07-12T10:24:23.178239520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:23.179357 containerd[1560]: time="2025-07-12T10:24:23.179294983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 12 10:24:23.181395 containerd[1560]: 
time="2025-07-12T10:24:23.181358368Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:23.188877 containerd[1560]: time="2025-07-12T10:24:23.188821315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:23.189234 containerd[1560]: time="2025-07-12T10:24:23.189192813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.323658463s" Jul 12 10:24:23.189234 containerd[1560]: time="2025-07-12T10:24:23.189223580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 12 10:24:23.190887 containerd[1560]: time="2025-07-12T10:24:23.190570390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 10:24:23.191554 containerd[1560]: time="2025-07-12T10:24:23.191505727Z" level=info msg="CreateContainer within sandbox \"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 10:24:23.203604 containerd[1560]: time="2025-07-12T10:24:23.203552332Z" level=info msg="Container 71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:23.211992 containerd[1560]: time="2025-07-12T10:24:23.211919668Z" level=info msg="CreateContainer within sandbox \"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1\"" Jul 12 10:24:23.212845 containerd[1560]: time="2025-07-12T10:24:23.212515076Z" level=info msg="StartContainer for \"71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1\"" Jul 12 10:24:23.213940 containerd[1560]: time="2025-07-12T10:24:23.213910557Z" level=info msg="connecting to shim 71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1" address="unix:///run/containerd/s/89f0528e9a5f0ed84ca6acc69db9b5630c655f7f039b35bc7af350c5961eb5fa" protocol=ttrpc version=3 Jul 12 10:24:23.233894 systemd[1]: Started cri-containerd-71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1.scope - libcontainer container 71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1. Jul 12 10:24:23.297074 containerd[1560]: time="2025-07-12T10:24:23.297024401Z" level=info msg="StartContainer for \"71b5614795bd276c6209c9caa4492fab01c406091ffcf2fd381781800bccc2b1\" returns successfully" Jul 12 10:24:23.539137 containerd[1560]: time="2025-07-12T10:24:23.539077093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\" id:\"99346d1268d8933306483a6a365500f3c5a32ba3339f6dbab9e5e356f41b2c4c\" pid:4979 exited_at:{seconds:1752315863 nanos:538447451}" Jul 12 10:24:24.520681 containerd[1560]: time="2025-07-12T10:24:24.520629542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:24.521524 containerd[1560]: time="2025-07-12T10:24:24.521489497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 12 10:24:24.522568 containerd[1560]: time="2025-07-12T10:24:24.522529640Z" level=info msg="ImageCreate event 
name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:24.524599 containerd[1560]: time="2025-07-12T10:24:24.524564612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:24.525142 containerd[1560]: time="2025-07-12T10:24:24.525097854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.334495543s" Jul 12 10:24:24.525142 containerd[1560]: time="2025-07-12T10:24:24.525138199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 12 10:24:24.527656 containerd[1560]: time="2025-07-12T10:24:24.527397723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 10:24:24.529237 containerd[1560]: time="2025-07-12T10:24:24.529206971Z" level=info msg="CreateContainer within sandbox \"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 10:24:24.536964 containerd[1560]: time="2025-07-12T10:24:24.536927591Z" level=info msg="Container bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:24.549488 containerd[1560]: time="2025-07-12T10:24:24.549438046Z" level=info msg="CreateContainer within sandbox \"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a\" for &ContainerMetadata{Name:whisker,Attempt:0,} 
returns container id \"bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1\"" Jul 12 10:24:24.549998 containerd[1560]: time="2025-07-12T10:24:24.549921705Z" level=info msg="StartContainer for \"bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1\"" Jul 12 10:24:24.551007 containerd[1560]: time="2025-07-12T10:24:24.550969533Z" level=info msg="connecting to shim bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1" address="unix:///run/containerd/s/08f7d01f43a41d7a1ddd7c9b3a2306dee62da8b5fd18390957fa96ff607b95a3" protocol=ttrpc version=3 Jul 12 10:24:24.571873 systemd[1]: Started cri-containerd-bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1.scope - libcontainer container bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1. Jul 12 10:24:24.622301 containerd[1560]: time="2025-07-12T10:24:24.622249561Z" level=info msg="StartContainer for \"bb1ac13c899cdb533b217b06c7fc01e8a293b8fc2ebd6fd493d144c1d32e75a1\" returns successfully" Jul 12 10:24:25.137271 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:36950.service - OpenSSH per-connection server daemon (10.0.0.1:36950). Jul 12 10:24:25.205520 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 36950 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:25.207093 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:25.211290 systemd-logind[1540]: New session 10 of user core. Jul 12 10:24:25.217850 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 10:24:25.394970 sshd[5040]: Connection closed by 10.0.0.1 port 36950 Jul 12 10:24:25.395238 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Jul 12 10:24:25.399948 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:36950.service: Deactivated successfully. Jul 12 10:24:25.402051 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 12 10:24:25.402828 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Jul 12 10:24:25.404109 systemd-logind[1540]: Removed session 10. Jul 12 10:24:25.650776 containerd[1560]: time="2025-07-12T10:24:25.650631436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:24:25.971621 systemd-networkd[1479]: cali1e90f333de3: Link UP Jul 12 10:24:25.972397 systemd-networkd[1479]: cali1e90f333de3: Gained carrier Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.763 [INFO][5053] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0 calico-apiserver-6d68fd648b- calico-apiserver 153be907-6581-4138-b29b-e67e9e609b4f 858 0 2025-07-12 10:23:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d68fd648b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d68fd648b-nbrdq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1e90f333de3 [] [] }} ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.764 [INFO][5053] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.790 
[INFO][5068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" HandleID="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Workload="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.790 [INFO][5068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" HandleID="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Workload="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d68fd648b-nbrdq", "timestamp":"2025-07-12 10:24:25.790428617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.790 [INFO][5068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.790 [INFO][5068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.790 [INFO][5068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.797 [INFO][5068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.802 [INFO][5068] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.806 [INFO][5068] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.807 [INFO][5068] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.809 [INFO][5068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.810 [INFO][5068] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.811 [INFO][5068] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.856 [INFO][5068] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.965 [INFO][5068] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.965 [INFO][5068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" host="localhost" Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.965 [INFO][5068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:24:26.101692 containerd[1560]: 2025-07-12 10:24:25.965 [INFO][5068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" HandleID="k8s-pod-network.89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Workload="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:25.969 [INFO][5053] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0", GenerateName:"calico-apiserver-6d68fd648b-", Namespace:"calico-apiserver", SelfLink:"", UID:"153be907-6581-4138-b29b-e67e9e609b4f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d68fd648b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d68fd648b-nbrdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e90f333de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:25.969 [INFO][5053] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:25.969 [INFO][5053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e90f333de3 ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:25.972 [INFO][5053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:25.973 [INFO][5053] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0", GenerateName:"calico-apiserver-6d68fd648b-", Namespace:"calico-apiserver", SelfLink:"", UID:"153be907-6581-4138-b29b-e67e9e609b4f", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d68fd648b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb", Pod:"calico-apiserver-6d68fd648b-nbrdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1e90f333de3", MAC:"ca:fe:3a:44:b5:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:24:26.102357 containerd[1560]: 2025-07-12 10:24:26.097 [INFO][5053] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" Namespace="calico-apiserver" Pod="calico-apiserver-6d68fd648b-nbrdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d68fd648b--nbrdq-eth0" Jul 12 10:24:26.132992 containerd[1560]: time="2025-07-12T10:24:26.132920069Z" level=info msg="connecting to shim 89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb" address="unix:///run/containerd/s/332e5326b291b595be0382203886b4036b020b3b2a2ea4ade769f5b8012c39df" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:24:26.168874 systemd[1]: Started cri-containerd-89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb.scope - libcontainer container 89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb. Jul 12 10:24:26.182292 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:24:26.293615 containerd[1560]: time="2025-07-12T10:24:26.293448193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d68fd648b-nbrdq,Uid:153be907-6581-4138-b29b-e67e9e609b4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb\"" Jul 12 10:24:26.296965 containerd[1560]: time="2025-07-12T10:24:26.296918259Z" level=info msg="CreateContainer within sandbox \"89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:24:26.310595 containerd[1560]: time="2025-07-12T10:24:26.310555377Z" level=info msg="Container 2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:26.318585 containerd[1560]: time="2025-07-12T10:24:26.318542896Z" level=info msg="CreateContainer within sandbox \"89439f107fa33459aafd059b8c9cda985707d74cba99967255e3c0a684e570bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container 
id \"2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad\"" Jul 12 10:24:26.319265 containerd[1560]: time="2025-07-12T10:24:26.319059406Z" level=info msg="StartContainer for \"2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad\"" Jul 12 10:24:26.320113 containerd[1560]: time="2025-07-12T10:24:26.320068421Z" level=info msg="connecting to shim 2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad" address="unix:///run/containerd/s/332e5326b291b595be0382203886b4036b020b3b2a2ea4ade769f5b8012c39df" protocol=ttrpc version=3 Jul 12 10:24:26.347882 systemd[1]: Started cri-containerd-2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad.scope - libcontainer container 2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad. Jul 12 10:24:26.397845 containerd[1560]: time="2025-07-12T10:24:26.397793986Z" level=info msg="StartContainer for \"2639e23cc7d127e293260603137b8a13ac2e7db98f74846bc82781bbc9aa8bad\" returns successfully" Jul 12 10:24:26.824981 kubelet[2707]: I0712 10:24:26.824908 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d68fd648b-nbrdq" podStartSLOduration=40.82489241 podStartE2EDuration="40.82489241s" podCreationTimestamp="2025-07-12 10:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:24:26.82431177 +0000 UTC m=+54.400297145" watchObservedRunningTime="2025-07-12 10:24:26.82489241 +0000 UTC m=+54.400877785" Jul 12 10:24:27.487023 systemd-networkd[1479]: cali1e90f333de3: Gained IPv6LL Jul 12 10:24:27.702337 kubelet[2707]: I0712 10:24:27.702285 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:24:29.218101 containerd[1560]: time="2025-07-12T10:24:29.218037562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 10:24:29.218983 containerd[1560]: time="2025-07-12T10:24:29.218942301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 12 10:24:29.220584 containerd[1560]: time="2025-07-12T10:24:29.220531264Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:29.222835 containerd[1560]: time="2025-07-12T10:24:29.222785877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:24:29.223373 containerd[1560]: time="2025-07-12T10:24:29.223325660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.695899063s" Jul 12 10:24:29.223373 containerd[1560]: time="2025-07-12T10:24:29.223365645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 12 10:24:29.224652 containerd[1560]: time="2025-07-12T10:24:29.224571099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 10:24:29.232904 containerd[1560]: time="2025-07-12T10:24:29.232850764Z" level=info msg="CreateContainer within sandbox \"e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 10:24:29.242044 containerd[1560]: 
time="2025-07-12T10:24:29.241998819Z" level=info msg="Container 9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:24:29.251170 containerd[1560]: time="2025-07-12T10:24:29.251120064Z" level=info msg="CreateContainer within sandbox \"e880233127c2bf815a76f292c7c48f56d12bafc31fa12b021c2e3064479693a3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233\"" Jul 12 10:24:29.251742 containerd[1560]: time="2025-07-12T10:24:29.251684174Z" level=info msg="StartContainer for \"9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233\"" Jul 12 10:24:29.252954 containerd[1560]: time="2025-07-12T10:24:29.252912911Z" level=info msg="connecting to shim 9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233" address="unix:///run/containerd/s/c7a936ed7d1f5802ec0cf0be9e0b024d283d5d0a751581348ba86a97f44dfc44" protocol=ttrpc version=3 Jul 12 10:24:29.276874 systemd[1]: Started cri-containerd-9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233.scope - libcontainer container 9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233. 
Jul 12 10:24:29.324668 containerd[1560]: time="2025-07-12T10:24:29.324623829Z" level=info msg="StartContainer for \"9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233\" returns successfully" Jul 12 10:24:29.719021 kubelet[2707]: I0712 10:24:29.718833 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-778bc96f59-rcg6j" podStartSLOduration=27.666366934 podStartE2EDuration="40.718808746s" podCreationTimestamp="2025-07-12 10:23:49 +0000 UTC" firstStartedPulling="2025-07-12 10:24:16.171904525 +0000 UTC m=+43.747889900" lastFinishedPulling="2025-07-12 10:24:29.224346337 +0000 UTC m=+56.800331712" observedRunningTime="2025-07-12 10:24:29.718121366 +0000 UTC m=+57.294106741" watchObservedRunningTime="2025-07-12 10:24:29.718808746 +0000 UTC m=+57.294794122" Jul 12 10:24:29.754312 containerd[1560]: time="2025-07-12T10:24:29.754158606Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233\" id:\"7b0e0cd37efe2c7eb5edb85c4e072b50819918101edc4c7eae6f6ad606a0b2f2\" pid:5233 exited_at:{seconds:1752315869 nanos:752207344}" Jul 12 10:24:30.411168 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:42760.service - OpenSSH per-connection server daemon (10.0.0.1:42760). Jul 12 10:24:30.542601 sshd[5244]: Accepted publickey for core from 10.0.0.1 port 42760 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:30.545209 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:30.549805 systemd-logind[1540]: New session 11 of user core. Jul 12 10:24:30.560856 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 10:24:30.780666 sshd[5247]: Connection closed by 10.0.0.1 port 42760 Jul 12 10:24:30.789334 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:42760.service: Deactivated successfully. 
Jul 12 10:24:30.781206 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Jul 12 10:24:30.791198 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 10:24:30.791938 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Jul 12 10:24:30.795554 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:42766.service - OpenSSH per-connection server daemon (10.0.0.1:42766). Jul 12 10:24:30.796479 systemd-logind[1540]: Removed session 11. Jul 12 10:24:30.857225 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 42766 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:30.858909 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:30.863923 systemd-logind[1540]: New session 12 of user core. Jul 12 10:24:30.874873 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 10:24:31.053280 sshd[5264]: Connection closed by 10.0.0.1 port 42766 Jul 12 10:24:31.052399 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Jul 12 10:24:31.062521 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:42766.service: Deactivated successfully. Jul 12 10:24:31.068322 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 10:24:31.073322 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Jul 12 10:24:31.075870 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:42778.service - OpenSSH per-connection server daemon (10.0.0.1:42778). Jul 12 10:24:31.077142 systemd-logind[1540]: Removed session 12. Jul 12 10:24:31.130164 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 42778 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:24:31.131459 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:24:31.140442 systemd-logind[1540]: New session 13 of user core. 
Jul 12 10:24:31.147936 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 10:24:31.407762 containerd[1560]: time="2025-07-12T10:24:31.407589726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:31.408700 containerd[1560]: time="2025-07-12T10:24:31.408672148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 12 10:24:31.410553 containerd[1560]: time="2025-07-12T10:24:31.410522472Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:31.412565 containerd[1560]: time="2025-07-12T10:24:31.412532525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:31.413291 containerd[1560]: time="2025-07-12T10:24:31.413260572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.188648957s"
Jul 12 10:24:31.413364 containerd[1560]: time="2025-07-12T10:24:31.413297501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 12 10:24:31.414551 containerd[1560]: time="2025-07-12T10:24:31.414452079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 12 10:24:31.416170 containerd[1560]: time="2025-07-12T10:24:31.416143825Z" level=info msg="CreateContainer within sandbox \"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 12 10:24:31.429306 containerd[1560]: time="2025-07-12T10:24:31.429255499Z" level=info msg="Container 4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:24:31.442573 sshd[5285]: Connection closed by 10.0.0.1 port 42778
Jul 12 10:24:31.443138 sshd-session[5280]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:31.444088 containerd[1560]: time="2025-07-12T10:24:31.444037440Z" level=info msg="CreateContainer within sandbox \"97f4c5f229e41267bfd26bbc6e7d599a373b357615841f3dd9f0aee9645d1e1c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398\""
Jul 12 10:24:31.445054 containerd[1560]: time="2025-07-12T10:24:31.444985019Z" level=info msg="StartContainer for \"4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398\""
Jul 12 10:24:31.446440 containerd[1560]: time="2025-07-12T10:24:31.446388314Z" level=info msg="connecting to shim 4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398" address="unix:///run/containerd/s/89f0528e9a5f0ed84ca6acc69db9b5630c655f7f039b35bc7af350c5961eb5fa" protocol=ttrpc version=3
Jul 12 10:24:31.454529 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:42778.service: Deactivated successfully.
Jul 12 10:24:31.457317 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 10:24:31.458331 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit.
Jul 12 10:24:31.460134 systemd-logind[1540]: Removed session 13.
Jul 12 10:24:31.479924 systemd[1]: Started cri-containerd-4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398.scope - libcontainer container 4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398.
Jul 12 10:24:31.526547 containerd[1560]: time="2025-07-12T10:24:31.526388151Z" level=info msg="StartContainer for \"4b58bfbbe8a934555f9784339cfe9b54af72f96f9e937490c8676e128ad78398\" returns successfully"
Jul 12 10:24:31.582633 kubelet[2707]: I0712 10:24:31.582567 2707 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 12 10:24:31.582633 kubelet[2707]: I0712 10:24:31.582610 2707 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 12 10:24:33.194500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843936130.mount: Deactivated successfully.
Jul 12 10:24:33.832514 containerd[1560]: time="2025-07-12T10:24:33.832456957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:33.833858 containerd[1560]: time="2025-07-12T10:24:33.833804618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 12 10:24:33.834998 containerd[1560]: time="2025-07-12T10:24:33.834959906Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:33.837563 containerd[1560]: time="2025-07-12T10:24:33.837518690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:24:33.838179 containerd[1560]: time="2025-07-12T10:24:33.838143693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.423637222s"
Jul 12 10:24:33.838179 containerd[1560]: time="2025-07-12T10:24:33.838172908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 12 10:24:33.840274 containerd[1560]: time="2025-07-12T10:24:33.840247692Z" level=info msg="CreateContainer within sandbox \"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 12 10:24:33.846197 containerd[1560]: time="2025-07-12T10:24:33.846157395Z" level=info msg="Container f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:24:33.855817 containerd[1560]: time="2025-07-12T10:24:33.855762116Z" level=info msg="CreateContainer within sandbox \"c25f2c8ad668314fd175ea067551e4ee4e96585bcd927d433b4623217a302c8a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180\""
Jul 12 10:24:33.856343 containerd[1560]: time="2025-07-12T10:24:33.856278996Z" level=info msg="StartContainer for \"f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180\""
Jul 12 10:24:33.857389 containerd[1560]: time="2025-07-12T10:24:33.857360176Z" level=info msg="connecting to shim f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180" address="unix:///run/containerd/s/08f7d01f43a41d7a1ddd7c9b3a2306dee62da8b5fd18390957fa96ff607b95a3" protocol=ttrpc version=3
Jul 12 10:24:33.882906 systemd[1]: Started cri-containerd-f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180.scope - libcontainer container f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180.
Jul 12 10:24:33.933675 containerd[1560]: time="2025-07-12T10:24:33.933579674Z" level=info msg="StartContainer for \"f9ff9e6e62cadde329de23dbd9f5fcc02a192d078da7414f8d40e2ca83203180\" returns successfully"
Jul 12 10:24:34.815387 kubelet[2707]: I0712 10:24:34.814869 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mvff8" podStartSLOduration=30.249049292 podStartE2EDuration="45.814850356s" podCreationTimestamp="2025-07-12 10:23:49 +0000 UTC" firstStartedPulling="2025-07-12 10:24:15.848437995 +0000 UTC m=+43.424423370" lastFinishedPulling="2025-07-12 10:24:31.414239058 +0000 UTC m=+58.990224434" observedRunningTime="2025-07-12 10:24:31.735164585 +0000 UTC m=+59.311149960" watchObservedRunningTime="2025-07-12 10:24:34.814850356 +0000 UTC m=+62.390835731"
Jul 12 10:24:34.816009 kubelet[2707]: I0712 10:24:34.815490 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cf5b9c586-9zbh9" podStartSLOduration=4.982071643 podStartE2EDuration="22.815480008s" podCreationTimestamp="2025-07-12 10:24:12 +0000 UTC" firstStartedPulling="2025-07-12 10:24:16.005501154 +0000 UTC m=+43.581486530" lastFinishedPulling="2025-07-12 10:24:33.83890953 +0000 UTC m=+61.414894895" observedRunningTime="2025-07-12 10:24:34.813654331 +0000 UTC m=+62.389639706" watchObservedRunningTime="2025-07-12 10:24:34.815480008 +0000 UTC m=+62.391465383"
Jul 12 10:24:36.459707 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:57804.service - OpenSSH per-connection server daemon (10.0.0.1:57804).
Jul 12 10:24:36.527812 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 57804 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:36.529441 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:36.533687 systemd-logind[1540]: New session 14 of user core.
Jul 12 10:24:36.546865 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 10:24:36.706609 sshd[5394]: Connection closed by 10.0.0.1 port 57804
Jul 12 10:24:36.706970 sshd-session[5391]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:36.711668 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:57804.service: Deactivated successfully.
Jul 12 10:24:36.713710 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 10:24:36.715102 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit.
Jul 12 10:24:36.716307 systemd-logind[1540]: Removed session 14.
Jul 12 10:24:41.719423 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:57820.service - OpenSSH per-connection server daemon (10.0.0.1:57820).
Jul 12 10:24:41.774137 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 57820 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:41.775554 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:41.779803 systemd-logind[1540]: New session 15 of user core.
Jul 12 10:24:41.787854 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 10:24:41.906262 sshd[5412]: Connection closed by 10.0.0.1 port 57820
Jul 12 10:24:41.906593 sshd-session[5409]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:41.913859 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:57820.service: Deactivated successfully.
Jul 12 10:24:41.916269 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 10:24:41.917178 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit.
Jul 12 10:24:41.918873 systemd-logind[1540]: Removed session 15.
Jul 12 10:24:42.722061 containerd[1560]: time="2025-07-12T10:24:42.722012356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4\" id:\"ce244c09b3e07b367ac9a8bf8e7a0b9c19b845d9509338a4d1140e9f6ac4a14f\" pid:5436 exited_at:{seconds:1752315882 nanos:721571544}"
Jul 12 10:24:42.823501 containerd[1560]: time="2025-07-12T10:24:42.823447219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e294b6b2defc2150e5f26c38d1e32bcce5be00f35b348aeca9a992f3884842e4\" id:\"a669adc52488785f42ec8cff44d7a065312a7cc44ca5b60d063af751f56cb9db\" pid:5461 exited_at:{seconds:1752315882 nanos:823103093}"
Jul 12 10:24:46.918669 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:35484.service - OpenSSH per-connection server daemon (10.0.0.1:35484).
Jul 12 10:24:46.967674 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 35484 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:46.969054 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:46.973479 systemd-logind[1540]: New session 16 of user core.
Jul 12 10:24:46.988869 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 10:24:47.117514 sshd[5481]: Connection closed by 10.0.0.1 port 35484
Jul 12 10:24:47.117926 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:47.122079 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:35484.service: Deactivated successfully.
Jul 12 10:24:47.124320 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 10:24:47.125857 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit.
Jul 12 10:24:47.127573 systemd-logind[1540]: Removed session 16.
Jul 12 10:24:48.503900 kubelet[2707]: E0712 10:24:48.503818 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 10:24:49.503194 kubelet[2707]: E0712 10:24:49.503141 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 10:24:52.130420 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:35492.service - OpenSSH per-connection server daemon (10.0.0.1:35492).
Jul 12 10:24:52.199642 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 35492 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:52.201764 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:52.206794 systemd-logind[1540]: New session 17 of user core.
Jul 12 10:24:52.211899 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 10:24:52.393882 sshd[5497]: Connection closed by 10.0.0.1 port 35492
Jul 12 10:24:52.394197 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:52.402581 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:35492.service: Deactivated successfully.
Jul 12 10:24:52.404737 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 10:24:52.405492 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
Jul 12 10:24:52.408209 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:35496.service - OpenSSH per-connection server daemon (10.0.0.1:35496).
Jul 12 10:24:52.408874 systemd-logind[1540]: Removed session 17.
Jul 12 10:24:52.464463 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 35496 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:52.465982 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:52.470301 systemd-logind[1540]: New session 18 of user core.
Jul 12 10:24:52.487875 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 10:24:52.769345 containerd[1560]: time="2025-07-12T10:24:52.769295218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c5ff7073be3e2212401dd0fd617d79b039b06c9fc9242659d628520e0803e50\" id:\"1a51108295d9315af23e5799b6f03ea89e29468521818e6bbd25bfcf5986263c\" pid:5538 exited_at:{seconds:1752315892 nanos:768965055}"
Jul 12 10:24:52.790930 sshd[5518]: Connection closed by 10.0.0.1 port 35496
Jul 12 10:24:52.793010 sshd-session[5515]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:52.805089 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:35496.service: Deactivated successfully.
Jul 12 10:24:52.807092 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 10:24:52.807956 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Jul 12 10:24:52.811450 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:35510.service - OpenSSH per-connection server daemon (10.0.0.1:35510).
Jul 12 10:24:52.812361 systemd-logind[1540]: Removed session 18.
Jul 12 10:24:52.863495 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 35510 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:52.865406 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:52.869852 systemd-logind[1540]: New session 19 of user core.
Jul 12 10:24:52.881873 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 10:24:53.488368 sshd[5560]: Connection closed by 10.0.0.1 port 35510
Jul 12 10:24:53.488902 sshd-session[5557]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:53.497504 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:35510.service: Deactivated successfully.
Jul 12 10:24:53.500784 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 10:24:53.502259 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Jul 12 10:24:53.506574 systemd-logind[1540]: Removed session 19.
Jul 12 10:24:53.509281 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:35524.service - OpenSSH per-connection server daemon (10.0.0.1:35524).
Jul 12 10:24:53.567675 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 35524 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:53.569469 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:53.574109 systemd-logind[1540]: New session 20 of user core.
Jul 12 10:24:53.584854 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 10:24:53.943558 sshd[5581]: Connection closed by 10.0.0.1 port 35524
Jul 12 10:24:53.944058 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:53.957138 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:35524.service: Deactivated successfully.
Jul 12 10:24:53.959600 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 10:24:53.960563 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Jul 12 10:24:53.963672 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:35538.service - OpenSSH per-connection server daemon (10.0.0.1:35538).
Jul 12 10:24:53.964456 systemd-logind[1540]: Removed session 20.
Jul 12 10:24:54.017552 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 35538 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:54.019561 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:54.024656 systemd-logind[1540]: New session 21 of user core.
Jul 12 10:24:54.037925 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 10:24:54.150564 sshd[5596]: Connection closed by 10.0.0.1 port 35538
Jul 12 10:24:54.150926 sshd-session[5593]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:54.155676 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:35538.service: Deactivated successfully.
Jul 12 10:24:54.157971 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 10:24:54.158927 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit.
Jul 12 10:24:54.160754 systemd-logind[1540]: Removed session 21.
Jul 12 10:24:55.503221 kubelet[2707]: E0712 10:24:55.503159 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 10:24:58.014222 kubelet[2707]: I0712 10:24:58.014162 2707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 10:24:59.166922 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:54404.service - OpenSSH per-connection server daemon (10.0.0.1:54404).
Jul 12 10:24:59.215654 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 54404 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:24:59.217536 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:24:59.222103 systemd-logind[1540]: New session 22 of user core.
Jul 12 10:24:59.236854 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 10:24:59.342693 sshd[5624]: Connection closed by 10.0.0.1 port 54404
Jul 12 10:24:59.343069 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
Jul 12 10:24:59.346565 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:54404.service: Deactivated successfully.
Jul 12 10:24:59.348755 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 10:24:59.349679 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit.
Jul 12 10:24:59.351532 systemd-logind[1540]: Removed session 22.
Jul 12 10:24:59.770514 containerd[1560]: time="2025-07-12T10:24:59.770463772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9f2903395d1af2d3dc5ebbfc3fbebedabaec15497fdbdb15c2fbb60226a57233\" id:\"50da06d0a46caf009dfbaa94d51ba75efe399fa168e05fa7184888a44ee3f35b\" pid:5649 exited_at:{seconds:1752315899 nanos:760277825}"
Jul 12 10:25:03.503637 kubelet[2707]: E0712 10:25:03.503592 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 10:25:04.355573 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:54406.service - OpenSSH per-connection server daemon (10.0.0.1:54406).
Jul 12 10:25:04.395278 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:25:04.396992 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:25:04.401441 systemd-logind[1540]: New session 23 of user core.
Jul 12 10:25:04.412931 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 10:25:04.518291 sshd[5664]: Connection closed by 10.0.0.1 port 54406
Jul 12 10:25:04.518647 sshd-session[5661]: pam_unix(sshd:session): session closed for user core
Jul 12 10:25:04.523410 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:54406.service: Deactivated successfully.
Jul 12 10:25:04.525535 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 10:25:04.526350 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit.
Jul 12 10:25:04.527522 systemd-logind[1540]: Removed session 23.
Jul 12 10:25:09.538505 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:54530.service - OpenSSH per-connection server daemon (10.0.0.1:54530).
Jul 12 10:25:09.589141 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 54530 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:25:09.590384 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:25:09.594535 systemd-logind[1540]: New session 24 of user core.
Jul 12 10:25:09.600827 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 10:25:09.757564 sshd[5682]: Connection closed by 10.0.0.1 port 54530
Jul 12 10:25:09.758039 sshd-session[5679]: pam_unix(sshd:session): session closed for user core
Jul 12 10:25:09.762777 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:54530.service: Deactivated successfully.
Jul 12 10:25:09.764863 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 10:25:09.765858 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit.
Jul 12 10:25:09.767016 systemd-logind[1540]: Removed session 24.
Jul 12 10:25:10.504113 kubelet[2707]: E0712 10:25:10.504049 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"