Jul 12 10:12:03.876809 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Jul 12 08:25:04 -00 2025 Jul 12 10:12:03.876834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:12:03.876846 kernel: BIOS-provided physical RAM map: Jul 12 10:12:03.876853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 12 10:12:03.876860 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 12 10:12:03.876866 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 12 10:12:03.876874 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 12 10:12:03.876881 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 12 10:12:03.876894 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 12 10:12:03.876900 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 12 10:12:03.876907 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 12 10:12:03.876914 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 12 10:12:03.876921 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 12 10:12:03.876927 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 12 10:12:03.876938 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 12 10:12:03.876946 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 12 10:12:03.876956 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 12 10:12:03.876963 
kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 12 10:12:03.876970 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 12 10:12:03.876977 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 12 10:12:03.876985 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 12 10:12:03.877003 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 12 10:12:03.877027 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 12 10:12:03.877035 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 12 10:12:03.877047 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 12 10:12:03.877071 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 12 10:12:03.877079 kernel: NX (Execute Disable) protection: active Jul 12 10:12:03.877086 kernel: APIC: Static calls initialized Jul 12 10:12:03.877093 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 12 10:12:03.877100 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 12 10:12:03.877107 kernel: extended physical RAM map: Jul 12 10:12:03.877115 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 12 10:12:03.877131 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 12 10:12:03.877155 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 12 10:12:03.877175 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 12 10:12:03.877184 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 12 10:12:03.877196 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 12 10:12:03.877203 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 12 10:12:03.877210 kernel: reserve setup_data: [mem 
0x0000000000900000-0x000000009b2e3017] usable Jul 12 10:12:03.877217 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 12 10:12:03.877228 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 12 10:12:03.877235 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 12 10:12:03.877245 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 12 10:12:03.877253 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 12 10:12:03.877260 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 12 10:12:03.877267 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 12 10:12:03.877275 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 12 10:12:03.877282 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 12 10:12:03.877290 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 12 10:12:03.877297 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 12 10:12:03.877304 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 12 10:12:03.877312 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 12 10:12:03.877322 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 12 10:12:03.877329 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 12 10:12:03.877336 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 12 10:12:03.877344 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 12 10:12:03.877351 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 12 10:12:03.877358 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] 
reserved Jul 12 10:12:03.877368 kernel: efi: EFI v2.7 by EDK II Jul 12 10:12:03.877376 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 12 10:12:03.877383 kernel: random: crng init done Jul 12 10:12:03.877393 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 12 10:12:03.877401 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 12 10:12:03.877412 kernel: secureboot: Secure boot disabled Jul 12 10:12:03.877419 kernel: SMBIOS 2.8 present. Jul 12 10:12:03.877427 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 12 10:12:03.877434 kernel: DMI: Memory slots populated: 1/1 Jul 12 10:12:03.877441 kernel: Hypervisor detected: KVM Jul 12 10:12:03.877449 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 12 10:12:03.877457 kernel: kvm-clock: using sched offset of 4471337293 cycles Jul 12 10:12:03.877476 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 12 10:12:03.877486 kernel: tsc: Detected 2794.746 MHz processor Jul 12 10:12:03.877496 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 12 10:12:03.877506 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 12 10:12:03.877519 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 12 10:12:03.877534 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 12 10:12:03.877545 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 12 10:12:03.877554 kernel: Using GB pages for direct mapping Jul 12 10:12:03.877564 kernel: ACPI: Early table checksum verification disabled Jul 12 10:12:03.877574 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 12 10:12:03.877583 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 12 10:12:03.877591 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 
00000001) Jul 12 10:12:03.877599 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 10:12:03.877609 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 12 10:12:03.877617 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 10:12:03.877625 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 10:12:03.877632 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 10:12:03.877640 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 10:12:03.877648 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 12 10:12:03.877655 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 12 10:12:03.877663 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 12 10:12:03.877673 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 12 10:12:03.877680 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 12 10:12:03.877688 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 12 10:12:03.877695 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 12 10:12:03.877703 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 12 10:12:03.877710 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 12 10:12:03.877718 kernel: No NUMA configuration found Jul 12 10:12:03.877725 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 12 10:12:03.877733 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 12 10:12:03.877741 kernel: Zone ranges: Jul 12 10:12:03.877751 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 12 10:12:03.877758 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 12 10:12:03.877766 kernel: Normal empty Jul 12 10:12:03.877774 
kernel: Device empty Jul 12 10:12:03.877781 kernel: Movable zone start for each node Jul 12 10:12:03.877789 kernel: Early memory node ranges Jul 12 10:12:03.877796 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 12 10:12:03.877804 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 12 10:12:03.877815 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 12 10:12:03.877825 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 12 10:12:03.877832 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 12 10:12:03.877840 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 12 10:12:03.877847 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 12 10:12:03.877855 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 12 10:12:03.877862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 12 10:12:03.877870 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 12 10:12:03.877880 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 12 10:12:03.877896 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 12 10:12:03.877904 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 12 10:12:03.877912 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 12 10:12:03.877920 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 12 10:12:03.877930 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 12 10:12:03.877938 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 12 10:12:03.877946 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 12 10:12:03.877954 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 12 10:12:03.877962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 12 10:12:03.877972 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 12 10:12:03.877980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 12 
10:12:03.877988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 12 10:12:03.877997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 12 10:12:03.878004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 12 10:12:03.878012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 12 10:12:03.878020 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 12 10:12:03.878028 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 12 10:12:03.878036 kernel: TSC deadline timer available Jul 12 10:12:03.878046 kernel: CPU topo: Max. logical packages: 1 Jul 12 10:12:03.878054 kernel: CPU topo: Max. logical dies: 1 Jul 12 10:12:03.878075 kernel: CPU topo: Max. dies per package: 1 Jul 12 10:12:03.878083 kernel: CPU topo: Max. threads per core: 1 Jul 12 10:12:03.878091 kernel: CPU topo: Num. cores per package: 4 Jul 12 10:12:03.878099 kernel: CPU topo: Num. threads per package: 4 Jul 12 10:12:03.878106 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 12 10:12:03.878114 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 12 10:12:03.878122 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 12 10:12:03.878133 kernel: kvm-guest: setup PV sched yield Jul 12 10:12:03.878141 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 12 10:12:03.878149 kernel: Booting paravirtualized kernel on KVM Jul 12 10:12:03.878157 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 12 10:12:03.878165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 12 10:12:03.878177 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 12 10:12:03.878188 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 12 10:12:03.878199 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 12 10:12:03.878218 kernel: kvm-guest: PV spinlocks enabled Jul 12 
10:12:03.878239 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 12 10:12:03.878248 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4 Jul 12 10:12:03.878259 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 10:12:03.878268 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 10:12:03.878276 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 10:12:03.878284 kernel: Fallback order for Node 0: 0 Jul 12 10:12:03.878292 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 12 10:12:03.878300 kernel: Policy zone: DMA32 Jul 12 10:12:03.878307 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 10:12:03.878318 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 12 10:12:03.878326 kernel: ftrace: allocating 40097 entries in 157 pages Jul 12 10:12:03.878334 kernel: ftrace: allocated 157 pages with 5 groups Jul 12 10:12:03.878342 kernel: Dynamic Preempt: voluntary Jul 12 10:12:03.878350 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 10:12:03.878358 kernel: rcu: RCU event tracing is enabled. Jul 12 10:12:03.878366 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 12 10:12:03.878375 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 10:12:03.878383 kernel: Rude variant of Tasks RCU enabled. Jul 12 10:12:03.878393 kernel: Tracing variant of Tasks RCU enabled. Jul 12 10:12:03.878401 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 10:12:03.878411 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 12 10:12:03.878419 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 10:12:03.878427 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 10:12:03.878436 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 10:12:03.878444 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 12 10:12:03.878451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 10:12:03.878459 kernel: Console: colour dummy device 80x25 Jul 12 10:12:03.878488 kernel: printk: legacy console [ttyS0] enabled Jul 12 10:12:03.878497 kernel: ACPI: Core revision 20240827 Jul 12 10:12:03.878505 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 12 10:12:03.878513 kernel: APIC: Switch to symmetric I/O mode setup Jul 12 10:12:03.878521 kernel: x2apic enabled Jul 12 10:12:03.878529 kernel: APIC: Switched APIC routing to: physical x2apic Jul 12 10:12:03.878537 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 12 10:12:03.878545 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 12 10:12:03.878552 kernel: kvm-guest: setup PV IPIs Jul 12 10:12:03.878564 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 12 10:12:03.878572 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Jul 12 10:12:03.878580 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Jul 12 10:12:03.878588 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 12 10:12:03.878596 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 12 10:12:03.878604 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 12 10:12:03.878612 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 12 10:12:03.878619 kernel: Spectre V2 : Mitigation: Retpolines Jul 12 10:12:03.878630 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 12 10:12:03.878638 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 12 10:12:03.878645 kernel: RETBleed: Mitigation: untrained return thunk Jul 12 10:12:03.878653 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 12 10:12:03.878664 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 12 10:12:03.878672 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 12 10:12:03.878680 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 12 10:12:03.878688 kernel: x86/bugs: return thunk changed Jul 12 10:12:03.878696 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 12 10:12:03.878706 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 12 10:12:03.878714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 12 10:12:03.878721 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 12 10:12:03.878739 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 12 10:12:03.878747 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 12 10:12:03.878755 kernel: Freeing SMP alternatives memory: 32K Jul 12 10:12:03.878763 kernel: pid_max: default: 32768 minimum: 301 Jul 12 10:12:03.878771 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 12 10:12:03.878778 kernel: landlock: Up and running. Jul 12 10:12:03.878789 kernel: SELinux: Initializing. Jul 12 10:12:03.878797 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 10:12:03.878805 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 10:12:03.878813 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 12 10:12:03.878821 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 12 10:12:03.878829 kernel: ... version: 0 Jul 12 10:12:03.878836 kernel: ... bit width: 48 Jul 12 10:12:03.878844 kernel: ... generic registers: 6 Jul 12 10:12:03.878852 kernel: ... value mask: 0000ffffffffffff Jul 12 10:12:03.878862 kernel: ... max period: 00007fffffffffff Jul 12 10:12:03.878869 kernel: ... fixed-purpose events: 0 Jul 12 10:12:03.878877 kernel: ... event mask: 000000000000003f Jul 12 10:12:03.878885 kernel: signal: max sigframe size: 1776 Jul 12 10:12:03.878892 kernel: rcu: Hierarchical SRCU implementation. Jul 12 10:12:03.878900 kernel: rcu: Max phase no-delay instances is 400. Jul 12 10:12:03.878910 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 12 10:12:03.878918 kernel: smp: Bringing up secondary CPUs ... Jul 12 10:12:03.878926 kernel: smpboot: x86: Booting SMP configuration: Jul 12 10:12:03.878936 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 12 10:12:03.878944 kernel: smp: Brought up 1 node, 4 CPUs Jul 12 10:12:03.878951 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jul 12 10:12:03.878959 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54608K init, 2360K bss, 137196K reserved, 0K cma-reserved) Jul 12 10:12:03.878967 kernel: devtmpfs: initialized Jul 12 10:12:03.878975 kernel: x86/mm: Memory block size: 128MB Jul 12 10:12:03.878983 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 12 10:12:03.878991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 12 10:12:03.878999 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 12 10:12:03.879009 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 12 10:12:03.879017 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 12 10:12:03.879025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 12 10:12:03.879032 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 10:12:03.879040 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 12 10:12:03.879048 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 10:12:03.879069 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 10:12:03.879078 kernel: audit: initializing netlink subsys (disabled) Jul 12 10:12:03.879086 kernel: audit: type=2000 audit(1752315121.481:1): state=initialized audit_enabled=0 res=1 Jul 12 10:12:03.879096 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 10:12:03.879104 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 12 10:12:03.879112 kernel: cpuidle: using governor menu Jul 12 10:12:03.879120 kernel: acpiphp: ACPI Hot Plug PCI 
Controller Driver version: 0.5 Jul 12 10:12:03.879128 kernel: dca service started, version 1.12.1 Jul 12 10:12:03.879136 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 12 10:12:03.879144 kernel: PCI: Using configuration type 1 for base access Jul 12 10:12:03.879152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 12 10:12:03.879162 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 10:12:03.879170 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 10:12:03.879178 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 10:12:03.879186 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 10:12:03.879193 kernel: ACPI: Added _OSI(Module Device) Jul 12 10:12:03.879201 kernel: ACPI: Added _OSI(Processor Device) Jul 12 10:12:03.879209 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 10:12:03.879217 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 10:12:03.879225 kernel: ACPI: Interpreter enabled Jul 12 10:12:03.879235 kernel: ACPI: PM: (supports S0 S3 S5) Jul 12 10:12:03.879243 kernel: ACPI: Using IOAPIC for interrupt routing Jul 12 10:12:03.879251 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 12 10:12:03.879259 kernel: PCI: Using E820 reservations for host bridge windows Jul 12 10:12:03.879267 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 12 10:12:03.879274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 10:12:03.879491 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 10:12:03.879620 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 12 10:12:03.879745 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 12 10:12:03.879756 kernel: PCI host bridge to bus 0000:00 Jul 12 
10:12:03.879906 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 12 10:12:03.880021 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 12 10:12:03.880193 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 12 10:12:03.880306 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 12 10:12:03.880438 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 12 10:12:03.880610 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 12 10:12:03.880722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 10:12:03.880878 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 12 10:12:03.881024 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 12 10:12:03.881173 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 12 10:12:03.881295 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 12 10:12:03.881458 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 12 10:12:03.881591 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 12 10:12:03.881753 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 12 10:12:03.881881 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 12 10:12:03.882003 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 12 10:12:03.882145 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 12 10:12:03.882287 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 12 10:12:03.882416 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 12 10:12:03.882546 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 12 10:12:03.882667 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 12 
10:12:03.882804 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 12 10:12:03.882926 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 12 10:12:03.883070 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 12 10:12:03.883242 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 12 10:12:03.883370 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 12 10:12:03.883521 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 12 10:12:03.883644 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 12 10:12:03.883779 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 12 10:12:03.883900 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 12 10:12:03.884020 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 12 10:12:03.884177 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 12 10:12:03.884310 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 12 10:12:03.884321 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 12 10:12:03.884330 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 12 10:12:03.884338 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 12 10:12:03.884346 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 12 10:12:03.884355 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 12 10:12:03.884363 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 12 10:12:03.884371 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 12 10:12:03.884382 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 12 10:12:03.884390 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 12 10:12:03.884398 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 12 10:12:03.884406 kernel: ACPI: PCI: 
Interrupt link GSIC configured for IRQ 18 Jul 12 10:12:03.884414 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 12 10:12:03.884422 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 12 10:12:03.884430 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 12 10:12:03.884438 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 12 10:12:03.884446 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 12 10:12:03.884457 kernel: iommu: Default domain type: Translated Jul 12 10:12:03.884474 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 12 10:12:03.884482 kernel: efivars: Registered efivars operations Jul 12 10:12:03.884490 kernel: PCI: Using ACPI for IRQ routing Jul 12 10:12:03.884498 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 12 10:12:03.884507 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 12 10:12:03.884514 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 12 10:12:03.884522 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 12 10:12:03.884530 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 12 10:12:03.884540 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 12 10:12:03.884548 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 12 10:12:03.884556 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 12 10:12:03.884564 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 12 10:12:03.884686 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 12 10:12:03.884807 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 12 10:12:03.884936 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 12 10:12:03.884948 kernel: vgaarb: loaded Jul 12 10:12:03.884960 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 12 10:12:03.884968 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 12 10:12:03.884976 
kernel: clocksource: Switched to clocksource kvm-clock Jul 12 10:12:03.884984 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 10:12:03.884993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 10:12:03.885000 kernel: pnp: PnP ACPI init Jul 12 10:12:03.885199 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 12 10:12:03.885230 kernel: pnp: PnP ACPI: found 6 devices Jul 12 10:12:03.885243 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 12 10:12:03.885251 kernel: NET: Registered PF_INET protocol family Jul 12 10:12:03.885260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 10:12:03.885269 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 10:12:03.885277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 10:12:03.885286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 10:12:03.885294 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 10:12:03.885303 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 10:12:03.885311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 10:12:03.885322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 10:12:03.885331 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 10:12:03.885339 kernel: NET: Registered PF_XDP protocol family Jul 12 10:12:03.885473 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 12 10:12:03.885598 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 12 10:12:03.885711 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 12 10:12:03.885822 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 12 10:12:03.885932 kernel: pci_bus 
0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 12 10:12:03.886047 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 12 10:12:03.886195 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 12 10:12:03.886308 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 12 10:12:03.886319 kernel: PCI: CLS 0 bytes, default 64
Jul 12 10:12:03.886328 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jul 12 10:12:03.886336 kernel: Initialise system trusted keyrings
Jul 12 10:12:03.886345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 10:12:03.886353 kernel: Key type asymmetric registered
Jul 12 10:12:03.886365 kernel: Asymmetric key parser 'x509' registered
Jul 12 10:12:03.886373 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 10:12:03.886382 kernel: io scheduler mq-deadline registered
Jul 12 10:12:03.886392 kernel: io scheduler kyber registered
Jul 12 10:12:03.886400 kernel: io scheduler bfq registered
Jul 12 10:12:03.886409 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 12 10:12:03.886420 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 12 10:12:03.886428 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 12 10:12:03.886437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 12 10:12:03.886445 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 10:12:03.886453 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 12 10:12:03.886470 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 12 10:12:03.886478 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 12 10:12:03.886487 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 12 10:12:03.886636 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 12 10:12:03.886759 kernel: rtc_cmos 00:04: registered as rtc0
Jul 12 10:12:03.886873 kernel:
rtc_cmos 00:04: setting system clock to 2025-07-12T10:12:03 UTC (1752315123)
Jul 12 10:12:03.886884 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 12 10:12:03.886996 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 12 10:12:03.887006 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 12 10:12:03.887015 kernel: efifb: probing for efifb
Jul 12 10:12:03.887024 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 12 10:12:03.887032 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 12 10:12:03.887044 kernel: efifb: scrolling: redraw
Jul 12 10:12:03.887052 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 10:12:03.887078 kernel: Console: switching to colour frame buffer device 160x50
Jul 12 10:12:03.887086 kernel: fb0: EFI VGA frame buffer device
Jul 12 10:12:03.887095 kernel: pstore: Using crash dump compression: deflate
Jul 12 10:12:03.887103 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 12 10:12:03.887112 kernel: NET: Registered PF_INET6 protocol family
Jul 12 10:12:03.887120 kernel: Segment Routing with IPv6
Jul 12 10:12:03.887129 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 10:12:03.887140 kernel: NET: Registered PF_PACKET protocol family
Jul 12 10:12:03.887148 kernel: Key type dns_resolver registered
Jul 12 10:12:03.887156 kernel: IPI shorthand broadcast: enabled
Jul 12 10:12:03.887165 kernel: sched_clock: Marking stable (3591002887, 157158778)->(3779173038, -31011373)
Jul 12 10:12:03.887173 kernel: registered taskstats version 1
Jul 12 10:12:03.887182 kernel: Loading compiled-in X.509 certificates
Jul 12 10:12:03.887190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0b66546913a05d1e6699856b7b667f16de808d3b'
Jul 12 10:12:03.887198 kernel: Demotion targets for Node 0: null
Jul 12 10:12:03.887206 kernel: Key type .fscrypt registered
Jul 12 10:12:03.887217 kernel: Key
type fscrypt-provisioning registered
Jul 12 10:12:03.887225 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 10:12:03.887234 kernel: ima: Allocated hash algorithm: sha1
Jul 12 10:12:03.887242 kernel: ima: No architecture policies found
Jul 12 10:12:03.887250 kernel: clk: Disabling unused clocks
Jul 12 10:12:03.887259 kernel: Warning: unable to open an initial console.
Jul 12 10:12:03.887267 kernel: Freeing unused kernel image (initmem) memory: 54608K
Jul 12 10:12:03.887275 kernel: Write protecting the kernel read-only data: 24576k
Jul 12 10:12:03.887286 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 12 10:12:03.887294 kernel: Run /init as init process
Jul 12 10:12:03.887303 kernel: with arguments:
Jul 12 10:12:03.887311 kernel: /init
Jul 12 10:12:03.887320 kernel: with environment:
Jul 12 10:12:03.887328 kernel: HOME=/
Jul 12 10:12:03.887338 kernel: TERM=linux
Jul 12 10:12:03.887346 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 10:12:03.887359 systemd[1]: Successfully made /usr/ read-only.
Jul 12 10:12:03.887373 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 10:12:03.887383 systemd[1]: Detected virtualization kvm.
Jul 12 10:12:03.887391 systemd[1]: Detected architecture x86-64.
Jul 12 10:12:03.887400 systemd[1]: Running in initrd.
Jul 12 10:12:03.887408 systemd[1]: No hostname configured, using default hostname.
Jul 12 10:12:03.887418 systemd[1]: Hostname set to .
Jul 12 10:12:03.887426 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 10:12:03.887435 systemd[1]: Queued start job for default target initrd.target.
Jul 12 10:12:03.887446 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 10:12:03.887455 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 10:12:03.887472 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 10:12:03.887481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 10:12:03.887490 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 10:12:03.887500 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 10:12:03.887512 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 10:12:03.887521 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 10:12:03.887530 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 10:12:03.887539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 10:12:03.887547 systemd[1]: Reached target paths.target - Path Units.
Jul 12 10:12:03.887556 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 10:12:03.887565 systemd[1]: Reached target swap.target - Swaps.
Jul 12 10:12:03.887574 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 10:12:03.887582 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 10:12:03.887595 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 10:12:03.887603 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 10:12:03.887612 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 12 10:12:03.887621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 10:12:03.887630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 10:12:03.887638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 10:12:03.887647 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 10:12:03.887655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 10:12:03.887666 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 10:12:03.887675 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 10:12:03.887684 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 12 10:12:03.887693 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 10:12:03.887702 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 10:12:03.887712 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 10:12:03.887721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:03.887730 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 10:12:03.887741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 10:12:03.887774 systemd-journald[220]: Collecting audit messages is disabled.
Jul 12 10:12:03.887799 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 10:12:03.887808 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 10:12:03.887817 systemd-journald[220]: Journal started
Jul 12 10:12:03.887842 systemd-journald[220]: Runtime Journal (/run/log/journal/56d182f8b8c04deba45258de05860ad6) is 6M, max 48.5M, 42.4M free.
Jul 12 10:12:03.878763 systemd-modules-load[222]: Inserted module 'overlay'
Jul 12 10:12:03.892080 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 10:12:03.895538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 10:12:03.897133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:03.900202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 10:12:03.946125 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 10:12:03.905423 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 10:12:03.950974 kernel: Bridge firewalling registered
Jul 12 10:12:03.947758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 10:12:03.949492 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 12 10:12:03.955231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 10:12:03.956224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 10:12:03.962216 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 12 10:12:03.965628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 10:12:03.970326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 10:12:03.971746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 10:12:03.972889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 10:12:03.976391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 10:12:03.979685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 10:12:04.004913 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aa07c6f7fdf02f2e05d879e4d058ee0cec0fba29acc0516234352104ac4e6c4
Jul 12 10:12:04.023712 systemd-resolved[262]: Positive Trust Anchors:
Jul 12 10:12:04.023730 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 10:12:04.023760 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 10:12:04.026335 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jul 12 10:12:04.027551 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 10:12:04.043150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 10:12:04.135104 kernel: SCSI subsystem initialized
Jul 12 10:12:04.145092 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 10:12:04.156089 kernel: iscsi: registered transport (tcp)
Jul 12 10:12:04.178105 kernel: iscsi: registered transport (qla4xxx)
Jul 12 10:12:04.178175 kernel: QLogic iSCSI HBA Driver
Jul 12 10:12:04.199273 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 10:12:04.226623 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 10:12:04.230203 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 10:12:04.293493 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 10:12:04.296824 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 10:12:04.359122 kernel: raid6: avx2x4 gen() 28504 MB/s
Jul 12 10:12:04.376151 kernel: raid6: avx2x2 gen() 22406 MB/s
Jul 12 10:12:04.393408 kernel: raid6: avx2x1 gen() 17879 MB/s
Jul 12 10:12:04.393518 kernel: raid6: using algorithm avx2x4 gen() 28504 MB/s
Jul 12 10:12:04.411378 kernel: raid6: .... xor() 6141 MB/s, rmw enabled
Jul 12 10:12:04.411502 kernel: raid6: using avx2x2 recovery algorithm
Jul 12 10:12:04.438135 kernel: xor: automatically using best checksumming function avx
Jul 12 10:12:04.611134 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 10:12:04.621273 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 10:12:04.624323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 10:12:04.664211 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 12 10:12:04.670517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 10:12:04.672819 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 10:12:04.707034 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jul 12 10:12:04.743145 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 10:12:04.744686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 10:12:04.822542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 10:12:04.825849 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 10:12:04.873487 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 12 10:12:04.873546 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 12 10:12:04.876088 kernel: cryptd: max_cpu_qlen set to 1000
Jul 12 10:12:04.881113 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 10:12:04.889360 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 10:12:04.889389 kernel: GPT:9289727 != 19775487
Jul 12 10:12:04.889404 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 10:12:04.890378 kernel: GPT:9289727 != 19775487
Jul 12 10:12:04.890397 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 10:12:04.891696 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 10:12:04.893083 kernel: libata version 3.00 loaded.
Jul 12 10:12:04.900079 kernel: AES CTR mode by8 optimization enabled
Jul 12 10:12:04.903816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 10:12:04.903947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:04.906196 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:04.909916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:04.912746 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 12 10:12:04.919217 kernel: ahci 0000:00:1f.2: version 3.0
Jul 12 10:12:04.919449 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 12 10:12:04.925454 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 12 10:12:04.925749 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 12 10:12:04.925904 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 12 10:12:04.933472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 10:12:04.934541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:04.961099 kernel: scsi host0: ahci
Jul 12 10:12:04.963087 kernel: scsi host1: ahci
Jul 12 10:12:04.964865 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 10:12:04.969263 kernel: scsi host2: ahci
Jul 12 10:12:04.969462 kernel: scsi host3: ahci
Jul 12 10:12:04.971570 kernel: scsi host4: ahci
Jul 12 10:12:04.974723 kernel: scsi host5: ahci
Jul 12 10:12:04.974910 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 12 10:12:04.974929 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 12 10:12:04.976541 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 12 10:12:04.976566 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 12 10:12:04.977409 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 12 10:12:04.978270 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 12 10:12:04.986086 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 10:12:05.005476 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 10:12:05.006772 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 10:12:05.018089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 10:12:05.020588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 10:12:05.023481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:05.045752 disk-uuid[633]: Primary Header is updated.
Jul 12 10:12:05.045752 disk-uuid[633]: Secondary Entries is updated.
Jul 12 10:12:05.045752 disk-uuid[633]: Secondary Header is updated.
Jul 12 10:12:05.049088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 10:12:05.061753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:05.285019 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 12 10:12:05.285125 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 12 10:12:05.285138 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 12 10:12:05.285149 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 12 10:12:05.286093 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 12 10:12:05.287099 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 12 10:12:05.288317 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 12 10:12:05.288337 kernel: ata3.00: applying bridge limits
Jul 12 10:12:05.289100 kernel: ata3.00: configured for UDMA/100
Jul 12 10:12:05.291092 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 12 10:12:05.342115 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 12 10:12:05.342492 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 10:12:05.363102 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 12 10:12:05.656233 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 10:12:05.657906 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 10:12:05.659586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 10:12:05.659821 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 10:12:05.661139 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 10:12:05.696673 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 10:12:06.058094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 10:12:06.058463 disk-uuid[638]: The operation has completed successfully.
Jul 12 10:12:06.084776 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 10:12:06.084905 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 10:12:06.124798 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 10:12:06.152549 sh[668]: Success
Jul 12 10:12:06.170514 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 10:12:06.170550 kernel: device-mapper: uevent: version 1.0.3
Jul 12 10:12:06.171665 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 12 10:12:06.181087 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 12 10:12:06.215560 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 10:12:06.218894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 10:12:06.238601 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 10:12:06.246038 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 12 10:12:06.246072 kernel: BTRFS: device fsid 4d28aa26-35d0-4997-8a2e-14597ed98f41 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (680)
Jul 12 10:12:06.248166 kernel: BTRFS info (device dm-0): first mount of filesystem 4d28aa26-35d0-4997-8a2e-14597ed98f41
Jul 12 10:12:06.248189 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 12 10:12:06.248200 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 12 10:12:06.252830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 10:12:06.253331 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 10:12:06.254577 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 10:12:06.255513 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 10:12:06.258087 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 10:12:06.286953 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713)
Jul 12 10:12:06.287006 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:12:06.287017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 10:12:06.288627 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 10:12:06.296083 kernel: BTRFS info (device vda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:12:06.297249 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 10:12:06.298347 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 10:12:06.419034 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 10:12:06.421123 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 10:12:06.431826 ignition[754]: Ignition 2.21.0
Jul 12 10:12:06.431840 ignition[754]: Stage: fetch-offline
Jul 12 10:12:06.431889 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:06.431903 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:06.432017 ignition[754]: parsed url from cmdline: ""
Jul 12 10:12:06.432022 ignition[754]: no config URL provided
Jul 12 10:12:06.432027 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 10:12:06.432037 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jul 12 10:12:06.432077 ignition[754]: op(1): [started] loading QEMU firmware config module
Jul 12 10:12:06.432082 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 10:12:06.440840 ignition[754]: op(1): [finished] loading QEMU firmware config module
Jul 12 10:12:06.440871 ignition[754]: QEMU firmware config was not found. Ignoring...
Jul 12 10:12:06.467469 systemd-networkd[856]: lo: Link UP
Jul 12 10:12:06.467478 systemd-networkd[856]: lo: Gained carrier
Jul 12 10:12:06.469154 systemd-networkd[856]: Enumeration completed
Jul 12 10:12:06.469300 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 10:12:06.469530 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:12:06.469535 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 10:12:06.470047 systemd-networkd[856]: eth0: Link UP
Jul 12 10:12:06.470051 systemd-networkd[856]: eth0: Gained carrier
Jul 12 10:12:06.470182 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:12:06.471847 systemd[1]: Reached target network.target - Network.
Jul 12 10:12:06.484128 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 10:12:06.493651 ignition[754]: parsing config with SHA512: 2a97f52e1e481a453c32a722acfa825a3bb8c2fb39a0a918c99cd6909e55ad9954f35af1d916978ce7773b491c21abc36c490c946e5c68e14f3015b135834fdf
Jul 12 10:12:06.500299 unknown[754]: fetched base config from "system"
Jul 12 10:12:06.500457 unknown[754]: fetched user config from "qemu"
Jul 12 10:12:06.500896 ignition[754]: fetch-offline: fetch-offline passed
Jul 12 10:12:06.500960 ignition[754]: Ignition finished successfully
Jul 12 10:12:06.504109 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 10:12:06.506477 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 10:12:06.508385 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 10:12:06.588013 ignition[863]: Ignition 2.21.0
Jul 12 10:12:06.588027 ignition[863]: Stage: kargs
Jul 12 10:12:06.588168 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:06.588180 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:06.592271 ignition[863]: kargs: kargs passed
Jul 12 10:12:06.592975 ignition[863]: Ignition finished successfully
Jul 12 10:12:06.597241 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 10:12:06.599378 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 10:12:06.620563 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.45
Jul 12 10:12:06.620579 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Jul 12 10:12:06.635026 ignition[871]: Ignition 2.21.0
Jul 12 10:12:06.635038 ignition[871]: Stage: disks
Jul 12 10:12:06.635183 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:06.635193 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:06.636810 ignition[871]: disks: disks passed
Jul 12 10:12:06.636881 ignition[871]: Ignition finished successfully
Jul 12 10:12:06.640157 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 10:12:06.640903 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 10:12:06.643875 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 10:12:06.644306 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 10:12:06.644649 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 10:12:06.644962 systemd[1]: Reached target basic.target - Basic System.
Jul 12 10:12:06.646631 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 10:12:06.676544 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 12 10:12:06.701366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 10:12:06.704794 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 10:12:06.813101 kernel: EXT4-fs (vda9): mounted filesystem e7cb62fe-c14e-444a-ae5a-364f9f21d58c r/w with ordered data mode. Quota mode: none.
Jul 12 10:12:06.814165 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 10:12:06.814833 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 10:12:06.818646 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 10:12:06.820351 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 10:12:06.822242 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 10:12:06.822310 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 10:12:06.822348 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 10:12:06.839673 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 10:12:06.841941 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 10:12:06.846108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890)
Jul 12 10:12:06.848086 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:12:06.848128 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 10:12:06.849081 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 10:12:06.852852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 10:12:06.883447 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 10:12:06.888482 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory
Jul 12 10:12:06.893398 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 10:12:06.897850 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 10:12:06.994830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 10:12:06.998677 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 10:12:07.001337 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 10:12:07.022083 kernel: BTRFS info (device vda6): last unmount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:12:07.037636 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 10:12:07.133581 ignition[1004]: INFO : Ignition 2.21.0
Jul 12 10:12:07.133581 ignition[1004]: INFO : Stage: mount
Jul 12 10:12:07.135496 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:07.135496 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:07.135496 ignition[1004]: INFO : mount: mount passed
Jul 12 10:12:07.135496 ignition[1004]: INFO : Ignition finished successfully
Jul 12 10:12:07.141719 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 10:12:07.144266 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 10:12:07.245563 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 10:12:07.247571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 10:12:07.276092 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Jul 12 10:12:07.278224 kernel: BTRFS info (device vda6): first mount of filesystem 2214f333-d3a1-4dd4-b25f-bf0ce0af42b2
Jul 12 10:12:07.278251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 12 10:12:07.278264 kernel: BTRFS info (device vda6): using free-space-tree
Jul 12 10:12:07.282631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 10:12:07.323461 ignition[1034]: INFO : Ignition 2.21.0
Jul 12 10:12:07.323461 ignition[1034]: INFO : Stage: files
Jul 12 10:12:07.325239 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:07.325239 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:07.327769 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 10:12:07.329866 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 10:12:07.329866 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 10:12:07.334847 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 10:12:07.336344 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 10:12:07.336344 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 10:12:07.335581 unknown[1034]: wrote ssh authorized keys file for user: core
Jul 12 10:12:07.340452 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 12 10:12:07.340452 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 12 10:12:07.378333 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 10:12:07.504833 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 12 10:12:07.504833 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 10:12:07.508832 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 10:12:07.520757 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 12 10:12:08.141247 systemd-networkd[856]: eth0: Gained IPv6LL
Jul 12 10:12:08.230514 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 12 10:12:08.626175 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 12 10:12:08.626175 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 12 10:12:08.629846 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 10:12:08.636010 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 10:12:08.636010 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 12 10:12:08.636010 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 12 10:12:08.640189 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 10:12:08.642009 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 10:12:08.642009 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 12 10:12:08.642009 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 10:12:08.665850 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 10:12:08.673675 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 10:12:08.675403 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 10:12:08.675403 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 10:12:08.678242 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 10:12:08.678242 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 10:12:08.678242 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 10:12:08.678242 ignition[1034]: INFO : files: files passed
Jul 12 10:12:08.678242 ignition[1034]: INFO : Ignition finished successfully
Jul 12 10:12:08.682194 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 10:12:08.686042 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 10:12:08.689020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 10:12:08.709320 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 10:12:08.709476 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 10:12:08.712597 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 10:12:08.717106 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:12:08.717106 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:12:08.721072 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 10:12:08.724006 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 10:12:08.724280 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 10:12:08.728467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 10:12:08.778121 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 10:12:08.778292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 10:12:08.779513 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 10:12:08.781526 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 10:12:08.781886 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 10:12:08.786557 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 10:12:08.814604 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 10:12:08.817505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 10:12:08.851574 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 10:12:08.853832 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 10:12:08.853992 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 10:12:08.856122 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 10:12:08.856255 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 10:12:08.860755 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 10:12:08.860890 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 10:12:08.863510 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 10:12:08.864388 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 10:12:08.864692 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 10:12:08.865005 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 12 10:12:08.865499 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 10:12:08.865809 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 10:12:08.866156 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 10:12:08.866622 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 10:12:08.866929 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 10:12:08.867394 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 10:12:08.867502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 10:12:08.883346 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 10:12:08.883488 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 10:12:08.883769 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 10:12:08.883939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 10:12:08.887416 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 10:12:08.887543 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 10:12:08.889695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 10:12:08.889804 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 10:12:08.892436 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 10:12:08.892661 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 10:12:08.892973 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 10:12:08.895974 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 10:12:08.896455 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 10:12:08.896772 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 10:12:08.896869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 10:12:08.902205 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 10:12:08.902290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 10:12:08.904567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 10:12:08.904694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 10:12:08.905432 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 10:12:08.905537 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 10:12:08.909139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 10:12:08.910748 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 10:12:08.910870 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 10:12:08.914489 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 10:12:08.915772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 10:12:08.915932 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 10:12:08.917110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 10:12:08.917212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 10:12:08.924631 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 10:12:08.925233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 10:12:08.948111 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 10:12:08.950427 ignition[1090]: INFO : Ignition 2.21.0
Jul 12 10:12:08.950427 ignition[1090]: INFO : Stage: umount
Jul 12 10:12:08.952167 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 10:12:08.952167 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 10:12:08.954271 ignition[1090]: INFO : umount: umount passed
Jul 12 10:12:08.954271 ignition[1090]: INFO : Ignition finished successfully
Jul 12 10:12:08.956302 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 10:12:08.956452 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 10:12:08.957832 systemd[1]: Stopped target network.target - Network.
Jul 12 10:12:08.960441 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 10:12:08.960502 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 10:12:08.963486 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 10:12:08.963551 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 10:12:08.964547 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 10:12:08.964603 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 10:12:08.964917 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 10:12:08.965086 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 10:12:08.965689 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 10:12:08.966014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 10:12:08.975547 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 10:12:08.975736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 10:12:08.980050 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 12 10:12:08.980390 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 10:12:08.980447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 10:12:08.986684 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 12 10:12:08.989130 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 10:12:08.989268 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 10:12:08.993614 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 12 10:12:08.993776 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 12 10:12:08.996807 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 10:12:08.996860 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 10:12:08.999939 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 10:12:09.000853 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 10:12:09.000906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 10:12:09.005157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 10:12:09.005212 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 10:12:09.007157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 10:12:09.007208 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 10:12:09.011228 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 10:12:09.013372 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 12 10:12:09.038941 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 10:12:09.039152 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 10:12:09.039692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 10:12:09.039735 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 10:12:09.043214 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 10:12:09.043251 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 10:12:09.045244 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 10:12:09.045294 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 10:12:09.048167 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 10:12:09.048218 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 10:12:09.051075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 10:12:09.051126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 10:12:09.055144 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 10:12:09.056040 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 12 10:12:09.056109 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 10:12:09.060276 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 10:12:09.060336 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 10:12:09.064836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 10:12:09.064909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:09.069978 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 10:12:09.072346 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 10:12:09.080681 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 10:12:09.080804 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 10:12:09.159688 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 10:12:09.159843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 10:12:09.161903 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 10:12:09.164385 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 10:12:09.164459 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 10:12:09.165612 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 10:12:09.189272 systemd[1]: Switching root.
Jul 12 10:12:09.225207 systemd-journald[220]: Journal stopped
Jul 12 10:12:10.456996 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 12 10:12:10.457085 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 10:12:10.457105 kernel: SELinux: policy capability open_perms=1
Jul 12 10:12:10.457117 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 10:12:10.457129 kernel: SELinux: policy capability always_check_network=0
Jul 12 10:12:10.457146 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 10:12:10.457158 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 10:12:10.457169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 10:12:10.457185 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 10:12:10.457201 kernel: SELinux: policy capability userspace_initial_context=0
Jul 12 10:12:10.457213 kernel: audit: type=1403 audit(1752315129.653:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 10:12:10.457231 systemd[1]: Successfully loaded SELinux policy in 60.146ms.
Jul 12 10:12:10.457257 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.413ms.
Jul 12 10:12:10.457276 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 12 10:12:10.457297 systemd[1]: Detected virtualization kvm.
Jul 12 10:12:10.457310 systemd[1]: Detected architecture x86-64.
Jul 12 10:12:10.457322 systemd[1]: Detected first boot.
Jul 12 10:12:10.457335 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 10:12:10.457347 zram_generator::config[1140]: No configuration found.
Jul 12 10:12:10.457366 kernel: Guest personality initialized and is inactive
Jul 12 10:12:10.457384 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 12 10:12:10.457396 kernel: Initialized host personality
Jul 12 10:12:10.457407 kernel: NET: Registered PF_VSOCK protocol family
Jul 12 10:12:10.457419 systemd[1]: Populated /etc with preset unit settings.
Jul 12 10:12:10.457432 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 12 10:12:10.457445 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 10:12:10.457457 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 10:12:10.457469 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 10:12:10.457487 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 10:12:10.457499 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 10:12:10.457512 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 10:12:10.457525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 10:12:10.457538 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 10:12:10.457551 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 10:12:10.457564 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 10:12:10.457576 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 10:12:10.457593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 10:12:10.457607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 10:12:10.457620 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 10:12:10.457633 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 10:12:10.457646 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 10:12:10.457659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 10:12:10.457671 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 10:12:10.457683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 10:12:10.457701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 10:12:10.457713 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 10:12:10.457911 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 10:12:10.457924 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 10:12:10.457937 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 10:12:10.457949 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 10:12:10.457962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 10:12:10.457974 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 10:12:10.457986 systemd[1]: Reached target swap.target - Swaps.
Jul 12 10:12:10.458006 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 10:12:10.458018 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 10:12:10.458031 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 12 10:12:10.458044 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 10:12:10.458057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 10:12:10.458112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 10:12:10.458125 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 10:12:10.458137 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 10:12:10.458149 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 10:12:10.458168 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 10:12:10.458181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:10.458199 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 10:12:10.458211 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 10:12:10.458224 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 10:12:10.458236 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 10:12:10.458249 systemd[1]: Reached target machines.target - Containers.
Jul 12 10:12:10.458261 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 10:12:10.458274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:12:10.458301 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 10:12:10.458320 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 10:12:10.458337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:12:10.458350 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 10:12:10.458363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:12:10.458375 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 10:12:10.458388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:12:10.458400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 10:12:10.458428 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 10:12:10.458440 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 10:12:10.458452 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 10:12:10.458465 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 10:12:10.458478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:12:10.458491 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 10:12:10.458503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 10:12:10.458516 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 10:12:10.458528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 10:12:10.458545 kernel: loop: module loaded
Jul 12 10:12:10.458558 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 12 10:12:10.458570 kernel: fuse: init (API version 7.41)
Jul 12 10:12:10.458583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 10:12:10.458598 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 10:12:10.458617 systemd[1]: Stopped verity-setup.service.
Jul 12 10:12:10.458631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:10.458643 kernel: ACPI: bus type drm_connector registered
Jul 12 10:12:10.458656 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 10:12:10.458669 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 10:12:10.458682 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 10:12:10.458699 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 10:12:10.458718 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 10:12:10.458731 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 10:12:10.458743 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 10:12:10.458779 systemd-journald[1207]: Collecting audit messages is disabled.
Jul 12 10:12:10.458803 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 10:12:10.458821 systemd-journald[1207]: Journal started
Jul 12 10:12:10.458844 systemd-journald[1207]: Runtime Journal (/run/log/journal/56d182f8b8c04deba45258de05860ad6) is 6M, max 48.5M, 42.4M free.
Jul 12 10:12:10.190132 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 10:12:10.217564 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 10:12:10.218110 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 10:12:10.460086 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 10:12:10.461943 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 10:12:10.462194 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 10:12:10.463684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:12:10.463910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:12:10.465619 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 10:12:10.465841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 10:12:10.467191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:12:10.467420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:12:10.469116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 10:12:10.469389 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 10:12:10.470765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:12:10.470989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:12:10.472476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 10:12:10.473910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 10:12:10.475499 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 10:12:10.477032 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 12 10:12:10.492915 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 10:12:10.495692 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 10:12:10.498321 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 10:12:10.499582 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 10:12:10.499619 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 10:12:10.501811 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 12 10:12:10.515647 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 10:12:10.517333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:12:10.519301 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 10:12:10.522924 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 10:12:10.524190 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 10:12:10.526049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 10:12:10.527296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 10:12:10.528594 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 10:12:10.530897 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 10:12:10.534271 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 10:12:10.540956 systemd-journald[1207]: Time spent on flushing to /var/log/journal/56d182f8b8c04deba45258de05860ad6 is 15.365ms for 1064 entries.
Jul 12 10:12:10.540956 systemd-journald[1207]: System Journal (/var/log/journal/56d182f8b8c04deba45258de05860ad6) is 8M, max 195.6M, 187.6M free.
Jul 12 10:12:10.566388 systemd-journald[1207]: Received client request to flush runtime journal.
Jul 12 10:12:10.566448 kernel: loop0: detected capacity change from 0 to 114000
Jul 12 10:12:10.543384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 10:12:10.546613 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 10:12:10.548305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 10:12:10.551003 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 10:12:10.562016 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 10:12:10.566206 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 12 10:12:10.569523 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 10:12:10.593569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 10:12:10.598890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 10:12:10.600907 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 10:12:10.605683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 10:12:10.617098 kernel: loop1: detected capacity change from 0 to 221472
Jul 12 10:12:10.621180 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 12 10:12:10.640690 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 12 10:12:10.641098 systemd-tmpfiles[1271]: ACLs are not supported, ignoring.
Jul 12 10:12:10.647656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 10:12:10.651090 kernel: loop2: detected capacity change from 0 to 146488
Jul 12 10:12:10.685094 kernel: loop3: detected capacity change from 0 to 114000
Jul 12 10:12:10.694364 kernel: loop4: detected capacity change from 0 to 221472
Jul 12 10:12:10.707114 kernel: loop5: detected capacity change from 0 to 146488
Jul 12 10:12:10.721480 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 10:12:10.722692 (sd-merge)[1280]: Merged extensions into '/usr'.
Jul 12 10:12:10.727153 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 10:12:10.727175 systemd[1]: Reloading...
Jul 12 10:12:10.828103 zram_generator::config[1306]: No configuration found.
Jul 12 10:12:10.965238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 10:12:10.970317 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 10:12:11.046972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 10:12:11.047183 systemd[1]: Reloading finished in 319 ms.
Jul 12 10:12:11.079776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 10:12:11.081352 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 10:12:11.095474 systemd[1]: Starting ensure-sysext.service...
Jul 12 10:12:11.097398 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 10:12:11.137887 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)...
Jul 12 10:12:11.137905 systemd[1]: Reloading...
Jul 12 10:12:11.144147 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 12 10:12:11.144188 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 12 10:12:11.144645 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 10:12:11.144926 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 10:12:11.145870 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 10:12:11.146170 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jul 12 10:12:11.146247 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jul 12 10:12:11.150632 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 10:12:11.150644 systemd-tmpfiles[1344]: Skipping /boot
Jul 12 10:12:11.161812 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 10:12:11.161826 systemd-tmpfiles[1344]: Skipping /boot
Jul 12 10:12:11.190093 zram_generator::config[1371]: No configuration found.
Jul 12 10:12:11.284851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 10:12:11.365401 systemd[1]: Reloading finished in 227 ms.
Jul 12 10:12:11.384609 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 10:12:11.405872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 10:12:11.414812 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 10:12:11.417223 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 10:12:11.419566 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 10:12:11.435844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 10:12:11.439409 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 10:12:11.442396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 10:12:11.447199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.447479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:12:11.453788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:12:11.459264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:12:11.462497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:12:11.463869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:12:11.463994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:12:11.467162 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 10:12:11.470122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.471647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 10:12:11.473629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:12:11.479293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:12:11.481146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:12:11.481388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:12:11.487860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:12:11.488235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:12:11.493027 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Jul 12 10:12:11.495639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.495858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:12:11.497580 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:12:11.501200 augenrules[1443]: No rules
Jul 12 10:12:11.501435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:12:11.505630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:12:11.507510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:12:11.507649 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:12:11.511895 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 10:12:11.513133 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.514704 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 10:12:11.514979 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 10:12:11.519015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 10:12:11.521289 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 10:12:11.523272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:12:11.523575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:12:11.525416 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:12:11.525637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:12:11.527425 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:12:11.532574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:12:11.534328 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 10:12:11.537162 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 10:12:11.555685 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 10:12:11.567037 systemd[1]: Finished ensure-sysext.service.
Jul 12 10:12:11.571509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.572881 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 10:12:11.573981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 10:12:11.575212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 10:12:11.578367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 10:12:11.587335 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 10:12:11.590105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 10:12:11.591473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 10:12:11.591519 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 12 10:12:11.595192 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 10:12:11.599448 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 10:12:11.600601 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 10:12:11.600626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 12 10:12:11.601283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 10:12:11.601504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 10:12:11.602920 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 10:12:11.603137 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 10:12:11.610421 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 10:12:11.610654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 10:12:11.612129 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 10:12:11.612342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 10:12:11.620161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 10:12:11.620439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 10:12:11.627262 augenrules[1492]: /sbin/augenrules: No change
Jul 12 10:12:11.636780 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 12 10:12:11.658646 augenrules[1522]: No rules
Jul 12 10:12:11.659945 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 10:12:11.661224 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 10:12:11.697350 systemd-resolved[1413]: Positive Trust Anchors:
Jul 12 10:12:11.697372 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 10:12:11.697404 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 10:12:11.701174 systemd-resolved[1413]: Defaulting to hostname 'linux'.
Jul 12 10:12:11.702945 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 10:12:11.704381 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 10:12:11.728174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 10:12:11.731367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 10:12:11.738101 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 10:12:11.751083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 12 10:12:11.755080 kernel: ACPI: button: Power Button [PWRF]
Jul 12 10:12:11.760620 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 10:12:11.776419 systemd-networkd[1498]: lo: Link UP
Jul 12 10:12:11.776698 systemd-networkd[1498]: lo: Gained carrier
Jul 12 10:12:11.778452 systemd-networkd[1498]: Enumeration completed
Jul 12 10:12:11.778595 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 10:12:11.779137 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:12:11.779221 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 10:12:11.779933 systemd-networkd[1498]: eth0: Link UP
Jul 12 10:12:11.780037 systemd[1]: Reached target network.target - Network.
Jul 12 10:12:11.780199 systemd-networkd[1498]: eth0: Gained carrier
Jul 12 10:12:11.780213 systemd-networkd[1498]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 10:12:11.782486 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 12 10:12:11.789208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 10:12:11.796482 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 12 10:12:11.796785 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 12 10:12:11.797986 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 12 10:12:11.798264 systemd-networkd[1498]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 10:12:11.812838 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 12 10:12:11.817395 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 10:12:11.818865 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 10:12:11.820043 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 10:12:11.821327 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 10:12:11.822572 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 12 10:12:11.823777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 10:12:11.825030 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 10:12:11.825068 systemd[1]: Reached target paths.target - Path Units.
Jul 12 10:12:11.825972 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 10:12:11.827162 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 10:12:11.828340 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 10:12:11.829573 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 10:12:11.831254 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 10:12:11.834200 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 10:12:13.365129 systemd-resolved[1413]: Clock change detected. Flushing caches.
Jul 12 10:12:13.365402 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 10:12:13.365736 systemd-timesyncd[1504]: Initial clock synchronization to Sat 2025-07-12 10:12:13.365081 UTC.
Jul 12 10:12:13.370029 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 12 10:12:13.373466 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 12 10:12:13.374721 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 12 10:12:13.388186 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 10:12:13.390609 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 12 10:12:13.392491 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 10:12:13.394415 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 10:12:13.395488 systemd[1]: Reached target basic.target - Basic System.
Jul 12 10:12:13.396544 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 10:12:13.396684 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 10:12:13.397748 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 10:12:13.403326 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 10:12:13.405485 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 10:12:13.408372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 10:12:13.411435 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 10:12:13.412441 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 10:12:13.414061 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 12 10:12:13.421345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 10:12:13.429657 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 10:12:13.434752 jq[1565]: false
Jul 12 10:12:13.433285 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 10:12:13.435380 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 10:12:13.441954 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing passwd entry cache
Jul 12 10:12:13.440474 oslogin_cache_refresh[1567]: Refreshing passwd entry cache
Jul 12 10:12:13.448396 oslogin_cache_refresh[1567]: Failure getting users, quitting
Jul 12 10:12:13.448219 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 10:12:13.449393 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting users, quitting
Jul 12 10:12:13.449393 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 10:12:13.449393 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Refreshing group entry cache
Jul 12 10:12:13.448422 oslogin_cache_refresh[1567]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 12 10:12:13.448483 oslogin_cache_refresh[1567]: Refreshing group entry cache
Jul 12 10:12:13.451281 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 10:12:13.452684 extend-filesystems[1566]: Found /dev/vda6
Jul 12 10:12:13.451949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 10:12:13.457241 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Failure getting groups, quitting
Jul 12 10:12:13.457241 google_oslogin_nss_cache[1567]: oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 10:12:13.454701 oslogin_cache_refresh[1567]: Failure getting groups, quitting
Jul 12 10:12:13.454711 oslogin_cache_refresh[1567]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 12 10:12:13.457547 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 10:12:13.460512 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 10:12:13.464644 extend-filesystems[1566]: Found /dev/vda9
Jul 12 10:12:13.465232 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 10:12:13.467140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 10:12:13.470463 extend-filesystems[1566]: Checking size of /dev/vda9
Jul 12 10:12:13.471550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 10:12:13.471971 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 12 10:12:13.472270 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 12 10:12:13.473846 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 10:12:13.474093 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 10:12:13.477857 jq[1584]: true
Jul 12 10:12:13.476317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 10:12:13.476861 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 10:12:13.496684 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 10:12:13.499900 jq[1590]: true
Jul 12 10:12:13.513270 tar[1589]: linux-amd64/helm
Jul 12 10:12:13.522269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:13.529926 extend-filesystems[1566]: Resized partition /dev/vda9
Jul 12 10:12:13.566713 extend-filesystems[1609]: resize2fs 1.47.2 (1-Jan-2025)
Jul 12 10:12:13.566564 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 10:12:13.570716 update_engine[1581]: I20250712 10:12:13.570349 1581 main.cc:92] Flatcar Update Engine starting
Jul 12 10:12:13.575191 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 10:12:13.596591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:13.598516 dbus-daemon[1561]: [system] SELinux support is enabled
Jul 12 10:12:13.599481 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 10:12:13.608558 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 10:12:13.608922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 10:12:13.612333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 10:12:13.614669 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 10:12:13.614863 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 10:12:13.617589 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 10:12:13.621009 update_engine[1581]: I20250712 10:12:13.619439 1581 update_check_scheduler.cc:74] Next update check in 8m45s
Jul 12 10:12:13.625200 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 10:12:13.626797 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 10:12:13.660893 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 12 10:12:13.668682 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 10:12:13.668682 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 10:12:13.668682 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 10:12:13.660914 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 12 10:12:13.680596 extend-filesystems[1566]: Resized filesystem in /dev/vda9
Jul 12 10:12:13.662669 systemd-logind[1577]: New seat seat0.
Jul 12 10:12:13.664321 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 10:12:13.664622 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 10:12:13.666261 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 10:12:13.705016 kernel: kvm_amd: TSC scaling supported
Jul 12 10:12:13.705088 kernel: kvm_amd: Nested Virtualization enabled
Jul 12 10:12:13.705102 kernel: kvm_amd: Nested Paging enabled
Jul 12 10:12:13.705142 kernel: kvm_amd: LBR virtualization supported
Jul 12 10:12:13.706441 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 12 10:12:13.706470 kernel: kvm_amd: Virtual GIF supported
Jul 12 10:12:13.717267 bash[1633]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 10:12:13.719867 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 10:12:13.720776 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 10:12:13.784500 kernel: EDAC MC: Ver: 3.0.0
Jul 12 10:12:13.808197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 10:12:13.810766 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 10:12:13.934924 containerd[1591]: time="2025-07-12T10:12:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 12 10:12:13.936460 containerd[1591]: time="2025-07-12T10:12:13.936279143Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 12 10:12:13.944048 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 10:12:13.944326 containerd[1591]: time="2025-07-12T10:12:13.944297045Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.608µs"
Jul 12 10:12:13.944326 containerd[1591]: time="2025-07-12T10:12:13.944320238Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 12 10:12:13.944382 containerd[1591]: time="2025-07-12T10:12:13.944348010Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 12 10:12:13.944557 containerd[1591]: time="2025-07-12T10:12:13.944527106Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 12 10:12:13.944557 containerd[1591]: time="2025-07-12T10:12:13.944547334Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 12 10:12:13.944600 containerd[1591]: time="2025-07-12T10:12:13.944574215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 10:12:13.944672 containerd[1591]: time="2025-07-12T10:12:13.944645198Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 12 10:12:13.944672 containerd[1591]: time="2025-07-12T10:12:13.944662140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945118 containerd[1591]: time="2025-07-12T10:12:13.944917489Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945118 containerd[1591]: time="2025-07-12T10:12:13.944940041Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945118 containerd[1591]: time="2025-07-12T10:12:13.944950270Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945118 containerd[1591]: time="2025-07-12T10:12:13.944957965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945118 containerd[1591]: time="2025-07-12T10:12:13.945075265Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945412 containerd[1591]: time="2025-07-12T10:12:13.945378784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945446 containerd[1591]: time="2025-07-12T10:12:13.945419070Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 12 10:12:13.945446 containerd[1591]: time="2025-07-12T10:12:13.945430221Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 12 10:12:13.945510 containerd[1591]: time="2025-07-12T10:12:13.945491496Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 12 10:12:13.945806 containerd[1591]: time="2025-07-12T10:12:13.945784746Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 12 10:12:13.945883 containerd[1591]: time="2025-07-12T10:12:13.945867020Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 10:12:13.969543 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 10:12:13.972845 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 10:12:13.992819 tar[1589]: linux-amd64/LICENSE
Jul 12 10:12:13.992918 tar[1589]: linux-amd64/README.md
Jul 12 10:12:13.997517 containerd[1591]: time="2025-07-12T10:12:13.997454031Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 12 10:12:13.997606 containerd[1591]: time="2025-07-12T10:12:13.997539091Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 12 10:12:13.997606 containerd[1591]: time="2025-07-12T10:12:13.997566813Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 12 10:12:13.997606 containerd[1591]: time="2025-07-12T10:12:13.997581450Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 12 10:12:13.997606 containerd[1591]: time="2025-07-12T10:12:13.997594404Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 12 10:12:13.997606 containerd[1591]: time="2025-07-12T10:12:13.997607579Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997624671Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997636473Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997651141Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997667411Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997678733Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 12 10:12:13.997719 containerd[1591]: time="2025-07-12T10:12:13.997691827Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 12 10:12:13.997938 containerd[1591]: time="2025-07-12T10:12:13.997843301Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 12 10:12:13.997938 containerd[1591]: time="2025-07-12T10:12:13.997878377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 12 10:12:13.997938 containerd[1591]: time="2025-07-12T10:12:13.997913112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 12 10:12:13.997938 containerd[1591]: time="2025-07-12T10:12:13.997929804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 12 10:12:13.997938 containerd[1591]: time="2025-07-12T10:12:13.997943279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.997954690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.997965481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.997979457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.997989896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.997999314Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 12 10:12:13.998095 containerd[1591]: time="2025-07-12T10:12:13.998010084Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 12 10:12:13.998295 containerd[1591]: time="2025-07-12T10:12:13.998103800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 12 10:12:13.998295 containerd[1591]: time="2025-07-12T10:12:13.998126072Z" level=info msg="Start snapshots syncer"
Jul 12 10:12:13.998295 containerd[1591]: time="2025-07-12T10:12:13.998160697Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 12 10:12:13.998983 containerd[1591]: time="2025-07-12T10:12:13.998432787Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 12 10:12:13.998983 containerd[1591]: time="2025-07-12T10:12:13.998497549Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998626200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998751766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998775570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998785659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998797642Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998811959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998823460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998833689Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998882581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998894774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998904352Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998944217Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998959575Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 12 10:12:13.999202 containerd[1591]: time="2025-07-12T10:12:13.998967711Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.998976367Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.998984973Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.998994130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999007285Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999028635Z" level=info msg="runtime interface created"
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999035147Z" level=info msg="created NRI interface"
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999044615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999057299Z" level=info msg="Connect containerd service"
Jul 12 10:12:13.999484 containerd[1591]: time="2025-07-12T10:12:13.999079741Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 10:12:14.000374 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 10:12:14.000727 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 10:12:14.002325 containerd[1591]: time="2025-07-12T10:12:14.002267021Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 10:12:14.012498 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 10:12:14.017351 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 10:12:14.046952 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 10:12:14.051121 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 10:12:14.055819 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 12 10:12:14.057475 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 10:12:14.139537 containerd[1591]: time="2025-07-12T10:12:14.139464813Z" level=info msg="Start subscribing containerd event"
Jul 12 10:12:14.139537 containerd[1591]: time="2025-07-12T10:12:14.139552969Z" level=info msg="Start recovering state"
Jul 12 10:12:14.139717 containerd[1591]: time="2025-07-12T10:12:14.139692591Z" level=info msg="Start event monitor"
Jul 12 10:12:14.139744 containerd[1591]: time="2025-07-12T10:12:14.139702549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 10:12:14.139821 containerd[1591]: time="2025-07-12T10:12:14.139780115Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 10:12:14.139821 containerd[1591]: time="2025-07-12T10:12:14.139707899Z" level=info msg="Start cni network conf syncer for default"
Jul 12 10:12:14.139985 containerd[1591]: time="2025-07-12T10:12:14.139880804Z" level=info msg="Start streaming server"
Jul 12 10:12:14.139985 containerd[1591]: time="2025-07-12T10:12:14.139908426Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 12 10:12:14.139985 containerd[1591]: time="2025-07-12T10:12:14.139919597Z" level=info msg="runtime interface starting up..."
Jul 12 10:12:14.139985 containerd[1591]: time="2025-07-12T10:12:14.139938652Z" level=info msg="starting plugins..."
Jul 12 10:12:14.139985 containerd[1591]: time="2025-07-12T10:12:14.139964841Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 12 10:12:14.140353 containerd[1591]: time="2025-07-12T10:12:14.140183261Z" level=info msg="containerd successfully booted in 0.205911s"
Jul 12 10:12:14.140335 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 10:12:14.663568 systemd-networkd[1498]: eth0: Gained IPv6LL
Jul 12 10:12:14.670068 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 10:12:14.672860 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 10:12:14.675785 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 12 10:12:14.679401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 10:12:14.689683 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 10:12:14.719971 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 12 10:12:14.720414 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 12 10:12:14.722308 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 10:12:14.724554 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 10:12:16.065711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 10:12:16.067796 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 10:12:16.069727 systemd[1]: Startup finished in 3.650s (kernel) + 6.008s (initrd) + 4.946s (userspace) = 14.605s.
Jul 12 10:12:16.070000 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 10:12:16.598168 kubelet[1707]: E0712 10:12:16.598043 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 10:12:16.602079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 10:12:16.602309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 10:12:16.602737 systemd[1]: kubelet.service: Consumed 1.705s CPU time, 266.4M memory peak.
Jul 12 10:12:18.055996 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 10:12:18.057416 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:35562.service - OpenSSH per-connection server daemon (10.0.0.1:35562).
Jul 12 10:12:18.133304 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 35562 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:18.135225 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:18.142605 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 10:12:18.143839 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 10:12:18.150154 systemd-logind[1577]: New session 1 of user core.
Jul 12 10:12:18.163753 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 12 10:12:18.167055 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 10:12:18.183975 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 10:12:18.186597 systemd-logind[1577]: New session c1 of user core.
Jul 12 10:12:18.342825 systemd[1726]: Queued start job for default target default.target.
Jul 12 10:12:18.361776 systemd[1726]: Created slice app.slice - User Application Slice.
Jul 12 10:12:18.361809 systemd[1726]: Reached target paths.target - Paths.
Jul 12 10:12:18.361859 systemd[1726]: Reached target timers.target - Timers.
Jul 12 10:12:18.363642 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 10:12:18.375613 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 10:12:18.375761 systemd[1726]: Reached target sockets.target - Sockets.
Jul 12 10:12:18.375815 systemd[1726]: Reached target basic.target - Basic System.
Jul 12 10:12:18.375859 systemd[1726]: Reached target default.target - Main User Target.
Jul 12 10:12:18.375895 systemd[1726]: Startup finished in 182ms.
Jul 12 10:12:18.376152 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 10:12:18.377859 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 10:12:18.440294 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:35572.service - OpenSSH per-connection server daemon (10.0.0.1:35572).
Jul 12 10:12:18.506374 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 35572 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:18.508097 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:18.512933 systemd-logind[1577]: New session 2 of user core.
Jul 12 10:12:18.526405 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 10:12:18.582100 sshd[1740]: Connection closed by 10.0.0.1 port 35572
Jul 12 10:12:18.582554 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Jul 12 10:12:18.591588 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:35572.service: Deactivated successfully.
Jul 12 10:12:18.593660 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 10:12:18.594612 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit.
Jul 12 10:12:18.597606 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:35580.service - OpenSSH per-connection server daemon (10.0.0.1:35580).
Jul 12 10:12:18.598102 systemd-logind[1577]: Removed session 2.
Jul 12 10:12:18.654401 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 35580 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:18.656157 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:18.661274 systemd-logind[1577]: New session 3 of user core.
Jul 12 10:12:18.672340 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 10:12:18.723345 sshd[1749]: Connection closed by 10.0.0.1 port 35580
Jul 12 10:12:18.723767 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Jul 12 10:12:18.737359 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:35580.service: Deactivated successfully.
Jul 12 10:12:18.739418 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 10:12:18.740147 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit.
Jul 12 10:12:18.742953 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:35588.service - OpenSSH per-connection server daemon (10.0.0.1:35588).
Jul 12 10:12:18.743736 systemd-logind[1577]: Removed session 3.
Jul 12 10:12:18.803547 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 35588 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:18.805802 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:18.810665 systemd-logind[1577]: New session 4 of user core.
Jul 12 10:12:18.818315 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 10:12:18.872645 sshd[1758]: Connection closed by 10.0.0.1 port 35588
Jul 12 10:12:18.873197 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Jul 12 10:12:18.888597 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:35588.service: Deactivated successfully.
Jul 12 10:12:18.890290 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 10:12:18.891057 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit.
Jul 12 10:12:18.893671 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604).
Jul 12 10:12:18.894195 systemd-logind[1577]: Removed session 4.
Jul 12 10:12:18.952558 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:18.953877 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:18.958413 systemd-logind[1577]: New session 5 of user core.
Jul 12 10:12:18.971338 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 10:12:19.114819 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 10:12:19.115138 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 10:12:19.137605 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jul 12 10:12:19.139279 sshd[1767]: Connection closed by 10.0.0.1 port 35604
Jul 12 10:12:19.139646 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Jul 12 10:12:19.152243 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:35604.service: Deactivated successfully.
Jul 12 10:12:19.154243 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 10:12:19.155049 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit.
Jul 12 10:12:19.158312 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:35618.service - OpenSSH per-connection server daemon (10.0.0.1:35618).
Jul 12 10:12:19.158828 systemd-logind[1577]: Removed session 5.
Jul 12 10:12:19.219099 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 35618 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:19.220891 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:19.225887 systemd-logind[1577]: New session 6 of user core.
Jul 12 10:12:19.239378 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 10:12:19.293952 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 10:12:19.294292 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 10:12:19.301583 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jul 12 10:12:19.307449 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 12 10:12:19.307751 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 10:12:19.317765 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 12 10:12:19.372843 augenrules[1801]: No rules
Jul 12 10:12:19.374575 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 10:12:19.374904 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 12 10:12:19.375977 sudo[1778]: pam_unix(sudo:session): session closed for user root
Jul 12 10:12:19.377539 sshd[1777]: Connection closed by 10.0.0.1 port 35618
Jul 12 10:12:19.377915 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Jul 12 10:12:19.390877 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:35618.service: Deactivated successfully.
Jul 12 10:12:19.392750 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 10:12:19.393508 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit.
Jul 12 10:12:19.396163 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:35624.service - OpenSSH per-connection server daemon (10.0.0.1:35624).
Jul 12 10:12:19.396739 systemd-logind[1577]: Removed session 6.
Jul 12 10:12:19.455536 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 35624 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:12:19.456903 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:12:19.461399 systemd-logind[1577]: New session 7 of user core.
Jul 12 10:12:19.471296 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 10:12:19.523864 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 10:12:19.524217 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 10:12:20.424151 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 10:12:20.442485 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 10:12:21.118559 dockerd[1835]: time="2025-07-12T10:12:21.118455017Z" level=info msg="Starting up"
Jul 12 10:12:21.119447 dockerd[1835]: time="2025-07-12T10:12:21.119413716Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 12 10:12:21.143958 dockerd[1835]: time="2025-07-12T10:12:21.143904943Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 12 10:12:21.423410 dockerd[1835]: time="2025-07-12T10:12:21.423277145Z" level=info msg="Loading containers: start."
Jul 12 10:12:21.434204 kernel: Initializing XFRM netlink socket
Jul 12 10:12:21.785319 systemd-networkd[1498]: docker0: Link UP
Jul 12 10:12:21.789277 dockerd[1835]: time="2025-07-12T10:12:21.789223867Z" level=info msg="Loading containers: done."
Jul 12 10:12:21.806524 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4156529734-merged.mount: Deactivated successfully.
Jul 12 10:12:21.808282 dockerd[1835]: time="2025-07-12T10:12:21.808229624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 10:12:21.808398 dockerd[1835]: time="2025-07-12T10:12:21.808374716Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 12 10:12:21.808507 dockerd[1835]: time="2025-07-12T10:12:21.808490603Z" level=info msg="Initializing buildkit"
Jul 12 10:12:21.838993 dockerd[1835]: time="2025-07-12T10:12:21.838958573Z" level=info msg="Completed buildkit initialization"
Jul 12 10:12:21.845534 dockerd[1835]: time="2025-07-12T10:12:21.845497932Z" level=info msg="Daemon has completed initialization"
Jul 12 10:12:21.845691 dockerd[1835]: time="2025-07-12T10:12:21.845640078Z" level=info msg="API listen on /run/docker.sock"
Jul 12 10:12:21.845756 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 10:12:22.660405 containerd[1591]: time="2025-07-12T10:12:22.660341406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 12 10:12:23.354268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776338034.mount: Deactivated successfully.
Jul 12 10:12:24.881678 containerd[1591]: time="2025-07-12T10:12:24.881616697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:24.882257 containerd[1591]: time="2025-07-12T10:12:24.882192788Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 12 10:12:24.883389 containerd[1591]: time="2025-07-12T10:12:24.883336183Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:24.885867 containerd[1591]: time="2025-07-12T10:12:24.885828178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:24.886667 containerd[1591]: time="2025-07-12T10:12:24.886627728Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.22623718s" Jul 12 10:12:24.886714 containerd[1591]: time="2025-07-12T10:12:24.886670128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 12 10:12:24.887522 containerd[1591]: time="2025-07-12T10:12:24.887480177Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 10:12:26.405192 containerd[1591]: time="2025-07-12T10:12:26.405105320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:26.405854 containerd[1591]: time="2025-07-12T10:12:26.405808820Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 12 10:12:26.406962 containerd[1591]: time="2025-07-12T10:12:26.406924263Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:26.409480 containerd[1591]: time="2025-07-12T10:12:26.409441996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:26.410518 containerd[1591]: time="2025-07-12T10:12:26.410488079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.522980931s" Jul 12 10:12:26.410518 containerd[1591]: time="2025-07-12T10:12:26.410520941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 12 10:12:26.411088 containerd[1591]: time="2025-07-12T10:12:26.411057447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 10:12:26.705872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 10:12:26.707632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:26.927217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 10:12:26.945533 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:27.006905 kubelet[2116]: E0712 10:12:27.006730 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:27.013927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:27.014144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:27.014537 systemd[1]: kubelet.service: Consumed 241ms CPU time, 111.3M memory peak. Jul 12 10:12:28.248267 containerd[1591]: time="2025-07-12T10:12:28.248190951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:28.249008 containerd[1591]: time="2025-07-12T10:12:28.248943062Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 12 10:12:28.250380 containerd[1591]: time="2025-07-12T10:12:28.250338610Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:28.253033 containerd[1591]: time="2025-07-12T10:12:28.252989764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:28.253910 containerd[1591]: time="2025-07-12T10:12:28.253876617Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.842779075s" Jul 12 10:12:28.253941 containerd[1591]: time="2025-07-12T10:12:28.253908627Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 12 10:12:28.254723 containerd[1591]: time="2025-07-12T10:12:28.254689743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 10:12:29.252374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161298236.mount: Deactivated successfully. Jul 12 10:12:30.602015 containerd[1591]: time="2025-07-12T10:12:30.601916939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:30.603835 containerd[1591]: time="2025-07-12T10:12:30.603734128Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 12 10:12:30.605413 containerd[1591]: time="2025-07-12T10:12:30.605374836Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:30.609243 containerd[1591]: time="2025-07-12T10:12:30.609162271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:30.610700 containerd[1591]: time="2025-07-12T10:12:30.610652447Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.355929292s" Jul 12 10:12:30.610769 containerd[1591]: time="2025-07-12T10:12:30.610710185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 12 10:12:30.611689 containerd[1591]: time="2025-07-12T10:12:30.611664756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 10:12:31.143880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021547957.mount: Deactivated successfully. Jul 12 10:12:32.502203 containerd[1591]: time="2025-07-12T10:12:32.502112792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:32.502928 containerd[1591]: time="2025-07-12T10:12:32.502875984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 12 10:12:32.504108 containerd[1591]: time="2025-07-12T10:12:32.504052841Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:32.507013 containerd[1591]: time="2025-07-12T10:12:32.506974713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:32.508171 containerd[1591]: time="2025-07-12T10:12:32.508137144Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.896445297s" Jul 12 10:12:32.508260 containerd[1591]: time="2025-07-12T10:12:32.508190885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 12 10:12:32.508713 containerd[1591]: time="2025-07-12T10:12:32.508688759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 10:12:32.983007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222005042.mount: Deactivated successfully. Jul 12 10:12:32.989423 containerd[1591]: time="2025-07-12T10:12:32.989363429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:32.990099 containerd[1591]: time="2025-07-12T10:12:32.990077919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 12 10:12:32.991338 containerd[1591]: time="2025-07-12T10:12:32.991316242Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:32.993254 containerd[1591]: time="2025-07-12T10:12:32.993225264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 10:12:32.993831 containerd[1591]: time="2025-07-12T10:12:32.993785916Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 485.070517ms" Jul 12 10:12:32.993831 containerd[1591]: time="2025-07-12T10:12:32.993817886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 12 10:12:32.994522 containerd[1591]: time="2025-07-12T10:12:32.994489876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 10:12:33.594608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687646116.mount: Deactivated successfully. Jul 12 10:12:35.830196 containerd[1591]: time="2025-07-12T10:12:35.830117997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:35.830934 containerd[1591]: time="2025-07-12T10:12:35.830906617Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 12 10:12:35.832388 containerd[1591]: time="2025-07-12T10:12:35.832353621Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:35.835157 containerd[1591]: time="2025-07-12T10:12:35.835129349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:35.836011 containerd[1591]: time="2025-07-12T10:12:35.835981107Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size 
\"56909194\" in 2.841460904s" Jul 12 10:12:35.836061 containerd[1591]: time="2025-07-12T10:12:35.836011724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 12 10:12:37.205801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 10:12:37.207551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:37.411232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:37.427133 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 10:12:37.478957 kubelet[2276]: E0712 10:12:37.478778 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 10:12:37.484561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 10:12:37.484808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 10:12:37.485293 systemd[1]: kubelet.service: Consumed 230ms CPU time, 108M memory peak. Jul 12 10:12:38.051657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:38.051859 systemd[1]: kubelet.service: Consumed 230ms CPU time, 108M memory peak. Jul 12 10:12:38.054461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:38.083139 systemd[1]: Reload requested from client PID 2291 ('systemctl') (unit session-7.scope)... Jul 12 10:12:38.083190 systemd[1]: Reloading... Jul 12 10:12:38.169407 zram_generator::config[2337]: No configuration found. 
Jul 12 10:12:38.691457 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 10:12:38.812991 systemd[1]: Reloading finished in 729 ms. Jul 12 10:12:38.883105 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 10:12:38.883230 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 10:12:38.883572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:38.883625 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.3M memory peak. Jul 12 10:12:38.885453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 10:12:39.105864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 10:12:39.118557 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 10:12:39.157102 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 10:12:39.157102 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 10:12:39.157102 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 10:12:39.157540 kubelet[2382]: I0712 10:12:39.157214 2382 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 10:12:39.600028 kubelet[2382]: I0712 10:12:39.599966 2382 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 10:12:39.600028 kubelet[2382]: I0712 10:12:39.600003 2382 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 10:12:39.600296 kubelet[2382]: I0712 10:12:39.600270 2382 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 10:12:39.623360 kubelet[2382]: E0712 10:12:39.623317 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:12:39.623812 kubelet[2382]: I0712 10:12:39.623792 2382 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 10:12:39.633358 kubelet[2382]: I0712 10:12:39.633307 2382 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 10:12:39.640261 kubelet[2382]: I0712 10:12:39.640189 2382 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 10:12:39.640777 kubelet[2382]: I0712 10:12:39.640742 2382 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 10:12:39.640961 kubelet[2382]: I0712 10:12:39.640911 2382 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 10:12:39.641112 kubelet[2382]: I0712 10:12:39.640954 2382 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 12 10:12:39.641271 kubelet[2382]: I0712 10:12:39.641129 2382 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 10:12:39.641271 kubelet[2382]: I0712 10:12:39.641138 2382 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 10:12:39.641319 kubelet[2382]: I0712 10:12:39.641298 2382 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:39.643247 kubelet[2382]: I0712 10:12:39.643165 2382 kubelet.go:408] "Attempting to sync node with API server" Jul 12 10:12:39.643401 kubelet[2382]: I0712 10:12:39.643264 2382 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 10:12:39.643401 kubelet[2382]: I0712 10:12:39.643333 2382 kubelet.go:314] "Adding apiserver pod source" Jul 12 10:12:39.643401 kubelet[2382]: I0712 10:12:39.643383 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 10:12:39.644197 kubelet[2382]: W0712 10:12:39.644132 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 12 10:12:39.644250 kubelet[2382]: E0712 10:12:39.644228 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:12:39.644357 kubelet[2382]: W0712 10:12:39.644304 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 12 10:12:39.644357 kubelet[2382]: E0712 
10:12:39.644346 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:12:39.645596 kubelet[2382]: I0712 10:12:39.645564 2382 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 10:12:39.646048 kubelet[2382]: I0712 10:12:39.646016 2382 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 10:12:39.646602 kubelet[2382]: W0712 10:12:39.646575 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 10:12:39.648332 kubelet[2382]: I0712 10:12:39.648305 2382 server.go:1274] "Started kubelet" Jul 12 10:12:39.649117 kubelet[2382]: I0712 10:12:39.648652 2382 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 10:12:39.649117 kubelet[2382]: I0712 10:12:39.648654 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 10:12:39.649117 kubelet[2382]: I0712 10:12:39.649006 2382 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 10:12:39.649897 kubelet[2382]: I0712 10:12:39.649870 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 10:12:39.650368 kubelet[2382]: I0712 10:12:39.650340 2382 server.go:449] "Adding debug handlers to kubelet server" Jul 12 10:12:39.651746 kubelet[2382]: I0712 10:12:39.651718 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 10:12:39.654385 kubelet[2382]: E0712 10:12:39.653692 
2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:39.654385 kubelet[2382]: I0712 10:12:39.653747 2382 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 10:12:39.654385 kubelet[2382]: I0712 10:12:39.653950 2382 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 10:12:39.654385 kubelet[2382]: I0712 10:12:39.654017 2382 reconciler.go:26] "Reconciler: start to sync state" Jul 12 10:12:39.654385 kubelet[2382]: E0712 10:12:39.653346 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851795a815e0dc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:12:39.648275904 +0000 UTC m=+0.525753863,LastTimestamp:2025-07-12 10:12:39.648275904 +0000 UTC m=+0.525753863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 10:12:39.654561 kubelet[2382]: E0712 10:12:39.654457 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Jul 12 10:12:39.654561 kubelet[2382]: W0712 10:12:39.654475 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused 
Jul 12 10:12:39.654614 kubelet[2382]: E0712 10:12:39.654584 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:12:39.655255 kubelet[2382]: I0712 10:12:39.655231 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 10:12:39.655393 kubelet[2382]: E0712 10:12:39.655361 2382 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 10:12:39.656316 kubelet[2382]: I0712 10:12:39.656289 2382 factory.go:221] Registration of the containerd container factory successfully Jul 12 10:12:39.656316 kubelet[2382]: I0712 10:12:39.656307 2382 factory.go:221] Registration of the systemd container factory successfully Jul 12 10:12:39.684482 kubelet[2382]: I0712 10:12:39.684449 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 10:12:39.684482 kubelet[2382]: I0712 10:12:39.684466 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 10:12:39.684482 kubelet[2382]: I0712 10:12:39.684483 2382 state_mem.go:36] "Initialized new in-memory state store" Jul 12 10:12:39.685485 kubelet[2382]: I0712 10:12:39.685460 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 10:12:39.686886 kubelet[2382]: I0712 10:12:39.686862 2382 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 10:12:39.686946 kubelet[2382]: I0712 10:12:39.686899 2382 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 10:12:39.686946 kubelet[2382]: I0712 10:12:39.686922 2382 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 10:12:39.686994 kubelet[2382]: E0712 10:12:39.686955 2382 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 10:12:39.753864 kubelet[2382]: E0712 10:12:39.753799 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:39.787392 kubelet[2382]: E0712 10:12:39.787299 2382 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 10:12:39.854799 kubelet[2382]: E0712 10:12:39.854533 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:39.855199 kubelet[2382]: E0712 10:12:39.855135 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Jul 12 10:12:39.955622 kubelet[2382]: E0712 10:12:39.955563 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:39.987820 kubelet[2382]: E0712 10:12:39.987748 2382 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 10:12:40.056201 kubelet[2382]: E0712 10:12:40.056137 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 10:12:40.088036 kubelet[2382]: W0712 10:12:40.087937 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jul 12 10:12:40.088099 kubelet[2382]: E0712 10:12:40.088051 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jul 12 10:12:40.088833 kubelet[2382]: I0712 10:12:40.088773 2382 policy_none.go:49] "None policy: Start" Jul 12 10:12:40.089736 kubelet[2382]: I0712 10:12:40.089698 2382 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 10:12:40.089785 kubelet[2382]: I0712 10:12:40.089750 2382 state_mem.go:35] "Initializing new in-memory state store" Jul 12 10:12:40.099139 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 10:12:40.122652 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 10:12:40.126915 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 10:12:40.141242 kubelet[2382]: I0712 10:12:40.141198 2382 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 10:12:40.141491 kubelet[2382]: I0712 10:12:40.141464 2382 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 10:12:40.141525 kubelet[2382]: I0712 10:12:40.141482 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 10:12:40.142021 kubelet[2382]: I0712 10:12:40.141746 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 10:12:40.143312 kubelet[2382]: E0712 10:12:40.143279 2382 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 12 10:12:40.243611 kubelet[2382]: I0712 10:12:40.243544 2382 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 10:12:40.244197 kubelet[2382]: E0712 10:12:40.244146 2382 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Jul 12 10:12:40.255732 kubelet[2382]: E0712 10:12:40.255699 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms"
Jul 12 10:12:40.397110 systemd[1]: Created slice kubepods-burstable-pod9b26717fba0f5abcefc0f6715f46c89f.slice - libcontainer container kubepods-burstable-pod9b26717fba0f5abcefc0f6715f46c89f.slice.
Jul 12 10:12:40.409111 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 12 10:12:40.419065 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice.
Jul 12 10:12:40.445864 kubelet[2382]: I0712 10:12:40.445836 2382 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 10:12:40.446261 kubelet[2382]: E0712 10:12:40.446212 2382 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Jul 12 10:12:40.459651 kubelet[2382]: I0712 10:12:40.459608 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 10:12:40.459651 kubelet[2382]: I0712 10:12:40.459642 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:40.459746 kubelet[2382]: I0712 10:12:40.459665 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:40.459746 kubelet[2382]: I0712 10:12:40.459682 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:40.459746 kubelet[2382]: I0712 10:12:40.459708 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:40.459746 kubelet[2382]: I0712 10:12:40.459724 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:40.459870 kubelet[2382]: I0712 10:12:40.459766 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:40.459870 kubelet[2382]: I0712 10:12:40.459828 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:40.459870 kubelet[2382]: I0712 10:12:40.459859 2382 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:40.468094 kubelet[2382]: W0712 10:12:40.468043 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 12 10:12:40.468167 kubelet[2382]: E0712 10:12:40.468101 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Jul 12 10:12:40.708351 containerd[1591]: time="2025-07-12T10:12:40.708300466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b26717fba0f5abcefc0f6715f46c89f,Namespace:kube-system,Attempt:0,}"
Jul 12 10:12:40.717817 containerd[1591]: time="2025-07-12T10:12:40.717774219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 12 10:12:40.722318 containerd[1591]: time="2025-07-12T10:12:40.722274201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 12 10:12:40.741442 containerd[1591]: time="2025-07-12T10:12:40.741397207Z" level=info msg="connecting to shim af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e" address="unix:///run/containerd/s/66f8c7e6007b0e91c0022a2154e61872162bf68dea3fd7c290c105de8b75b10b" namespace=k8s.io protocol=ttrpc version=3
Jul 12 10:12:40.760878 kubelet[2382]: W0712 10:12:40.760381 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 12 10:12:40.761198 kubelet[2382]: E0712 10:12:40.761046 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Jul 12 10:12:40.761326 containerd[1591]: time="2025-07-12T10:12:40.761293114Z" level=info msg="connecting to shim 5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3" address="unix:///run/containerd/s/63b06db4a4200fb2a7368b4bb4dcc808629d5e9b30dfdb8d02d9a3e57e01106b" namespace=k8s.io protocol=ttrpc version=3
Jul 12 10:12:40.774068 containerd[1591]: time="2025-07-12T10:12:40.773314587Z" level=info msg="connecting to shim f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278" address="unix:///run/containerd/s/f7429b89b27a5b7fbd1bf40dd5d3c8796088c45dc7f38bb89a0d42bad7dbd038" namespace=k8s.io protocol=ttrpc version=3
Jul 12 10:12:40.793350 systemd[1]: Started cri-containerd-af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e.scope - libcontainer container af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e.
Jul 12 10:12:40.797034 systemd[1]: Started cri-containerd-5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3.scope - libcontainer container 5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3.
Jul 12 10:12:40.828877 systemd[1]: Started cri-containerd-f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278.scope - libcontainer container f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278.
Jul 12 10:12:40.848054 kubelet[2382]: I0712 10:12:40.848028 2382 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 10:12:40.848540 kubelet[2382]: E0712 10:12:40.848516 2382 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Jul 12 10:12:40.876253 containerd[1591]: time="2025-07-12T10:12:40.876096082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9b26717fba0f5abcefc0f6715f46c89f,Namespace:kube-system,Attempt:0,} returns sandbox id \"af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e\""
Jul 12 10:12:40.882877 containerd[1591]: time="2025-07-12T10:12:40.882821920Z" level=info msg="CreateContainer within sandbox \"af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 12 10:12:40.883284 containerd[1591]: time="2025-07-12T10:12:40.883243752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278\""
Jul 12 10:12:40.885309 containerd[1591]: time="2025-07-12T10:12:40.885282747Z" level=info msg="CreateContainer within sandbox \"f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 12 10:12:40.886013 containerd[1591]: time="2025-07-12T10:12:40.885977671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3\""
Jul 12 10:12:40.888134 containerd[1591]: time="2025-07-12T10:12:40.888097127Z" level=info msg="CreateContainer within sandbox \"5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 12 10:12:40.894385 containerd[1591]: time="2025-07-12T10:12:40.894351471Z" level=info msg="Container a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:12:40.902701 containerd[1591]: time="2025-07-12T10:12:40.902659938Z" level=info msg="Container 1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:12:40.906218 containerd[1591]: time="2025-07-12T10:12:40.906167158Z" level=info msg="Container c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:12:40.910006 containerd[1591]: time="2025-07-12T10:12:40.909962518Z" level=info msg="CreateContainer within sandbox \"f2845ee24f9388ea93f5b4959f60c0eddfe059ee52bc0ae836bece87411b9278\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c\""
Jul 12 10:12:40.910739 containerd[1591]: time="2025-07-12T10:12:40.910691696Z" level=info msg="StartContainer for \"1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c\""
Jul 12 10:12:40.911751 containerd[1591]: time="2025-07-12T10:12:40.911725395Z" level=info msg="connecting to shim 1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c" address="unix:///run/containerd/s/f7429b89b27a5b7fbd1bf40dd5d3c8796088c45dc7f38bb89a0d42bad7dbd038" protocol=ttrpc version=3
Jul 12 10:12:40.912871 containerd[1591]: time="2025-07-12T10:12:40.912844535Z" level=info msg="CreateContainer within sandbox \"af0a2e4710ea735fa5ab5faadd4f962299171e2e26feabe9fa3391c2515bf48e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582\""
Jul 12 10:12:40.913775 containerd[1591]: time="2025-07-12T10:12:40.913713575Z" level=info msg="StartContainer for \"a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582\""
Jul 12 10:12:40.914928 containerd[1591]: time="2025-07-12T10:12:40.914892287Z" level=info msg="connecting to shim a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582" address="unix:///run/containerd/s/66f8c7e6007b0e91c0022a2154e61872162bf68dea3fd7c290c105de8b75b10b" protocol=ttrpc version=3
Jul 12 10:12:40.916030 containerd[1591]: time="2025-07-12T10:12:40.916003221Z" level=info msg="CreateContainer within sandbox \"5b795b4f715b998126a58b4a7dd862d53b5d6c9d322eb44c2cd746b6a4de9df3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf\""
Jul 12 10:12:40.916564 containerd[1591]: time="2025-07-12T10:12:40.916501025Z" level=info msg="StartContainer for \"c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf\""
Jul 12 10:12:40.916731 kubelet[2382]: W0712 10:12:40.916645 2382 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Jul 12 10:12:40.916778 kubelet[2382]: E0712 10:12:40.916738 2382 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError"
Jul 12 10:12:40.917813 containerd[1591]: time="2025-07-12T10:12:40.917781647Z" level=info msg="connecting to shim c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf" address="unix:///run/containerd/s/63b06db4a4200fb2a7368b4bb4dcc808629d5e9b30dfdb8d02d9a3e57e01106b" protocol=ttrpc version=3
Jul 12 10:12:40.936360 systemd[1]: Started cri-containerd-1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c.scope - libcontainer container 1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c.
Jul 12 10:12:40.950351 systemd[1]: Started cri-containerd-a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582.scope - libcontainer container a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582.
Jul 12 10:12:40.951888 systemd[1]: Started cri-containerd-c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf.scope - libcontainer container c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf.
Jul 12 10:12:41.059013 kubelet[2382]: E0712 10:12:41.058873 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s"
Jul 12 10:12:41.214404 containerd[1591]: time="2025-07-12T10:12:41.214252758Z" level=info msg="StartContainer for \"a485b0e59b44a0a62687c508faf00d6f8dc167378f047de14adcbdf978bb1582\" returns successfully"
Jul 12 10:12:41.214740 containerd[1591]: time="2025-07-12T10:12:41.214676333Z" level=info msg="StartContainer for \"1e55eeff0b79976c369cfe9f0316c23513fd69a8f924149b8bfe5f64ecc1528c\" returns successfully"
Jul 12 10:12:41.215822 containerd[1591]: time="2025-07-12T10:12:41.215084749Z" level=info msg="StartContainer for \"c52c6d5013b9eb4cde08e0e8b0725e870143c3ec63ce3e5c605564e08c8a5eaf\" returns successfully"
Jul 12 10:12:41.650711 kubelet[2382]: I0712 10:12:41.650625 2382 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 10:12:42.662956 kubelet[2382]: E0712 10:12:42.662890 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 12 10:12:42.714997 kubelet[2382]: I0712 10:12:42.714949 2382 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 12 10:12:42.714997 kubelet[2382]: E0712 10:12:42.714982 2382 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 12 10:12:42.726521 kubelet[2382]: E0712 10:12:42.726434 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 10:12:42.826990 kubelet[2382]: E0712 10:12:42.826916 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 10:12:42.927690 kubelet[2382]: E0712 10:12:42.927512 2382 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 10:12:43.645111 kubelet[2382]: I0712 10:12:43.645035 2382 apiserver.go:52] "Watching apiserver"
Jul 12 10:12:43.654161 kubelet[2382]: I0712 10:12:43.654109 2382 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 10:12:45.051085 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-7.scope)...
Jul 12 10:12:45.051102 systemd[1]: Reloading...
Jul 12 10:12:45.128432 zram_generator::config[2702]: No configuration found.
Jul 12 10:12:45.247866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 10:12:45.421585 systemd[1]: Reloading finished in 370 ms.
Jul 12 10:12:45.458534 kubelet[2382]: I0712 10:12:45.458462 2382 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 10:12:45.460815 kubelet[2382]: E0712 10:12:45.458461 2382 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1851795a815e0dc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 10:12:39.648275904 +0000 UTC m=+0.525753863,LastTimestamp:2025-07-12 10:12:39.648275904 +0000 UTC m=+0.525753863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 12 10:12:45.458897 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 10:12:45.485814 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 10:12:45.486216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 10:12:45.486278 systemd[1]: kubelet.service: Consumed 1.054s CPU time, 133M memory peak.
Jul 12 10:12:45.488215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 10:12:45.690986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 10:12:45.703566 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 10:12:45.787878 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 10:12:45.787878 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 12 10:12:45.787878 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 10:12:45.788320 kubelet[2749]: I0712 10:12:45.787938 2749 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 10:12:45.795268 kubelet[2749]: I0712 10:12:45.795234 2749 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 12 10:12:45.795268 kubelet[2749]: I0712 10:12:45.795256 2749 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 10:12:45.795594 kubelet[2749]: I0712 10:12:45.795568 2749 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 12 10:12:45.796959 kubelet[2749]: I0712 10:12:45.796934 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 12 10:12:45.801221 kubelet[2749]: I0712 10:12:45.799329 2749 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 10:12:45.804666 kubelet[2749]: I0712 10:12:45.804638 2749 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 12 10:12:45.809466 kubelet[2749]: I0712 10:12:45.809438 2749 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 10:12:45.809628 kubelet[2749]: I0712 10:12:45.809600 2749 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 12 10:12:45.809782 kubelet[2749]: I0712 10:12:45.809737 2749 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 10:12:45.809962 kubelet[2749]: I0712 10:12:45.809772 2749 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 10:12:45.810049 kubelet[2749]: I0712 10:12:45.809964 2749 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 10:12:45.810049 kubelet[2749]: I0712 10:12:45.809974 2749 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 10:12:45.810049 kubelet[2749]: I0712 10:12:45.810005 2749 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 10:12:45.810135 kubelet[2749]: I0712 10:12:45.810120 2749 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 10:12:45.810135 kubelet[2749]: I0712 10:12:45.810134 2749 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 10:12:45.811211 kubelet[2749]: I0712 10:12:45.810208 2749 kubelet.go:314] "Adding apiserver pod source"
Jul 12 10:12:45.811211 kubelet[2749]: I0712 10:12:45.810225 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 10:12:45.811211 kubelet[2749]: I0712 10:12:45.810932 2749 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 12 10:12:45.811433 kubelet[2749]: I0712 10:12:45.811370 2749 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 10:12:45.812438 kubelet[2749]: I0712 10:12:45.811881 2749 server.go:1274] "Started kubelet"
Jul 12 10:12:45.812586 kubelet[2749]: I0712 10:12:45.812534 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 10:12:45.816111 kubelet[2749]: I0712 10:12:45.816080 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 10:12:45.816591 kubelet[2749]: I0712 10:12:45.816448 2749 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 10:12:45.821597 kubelet[2749]: I0712 10:12:45.820679 2749 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 10:12:45.821684 kubelet[2749]: I0712 10:12:45.821674 2749 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 10:12:45.821828 kubelet[2749]: I0712 10:12:45.821810 2749 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 10:12:45.822643 kubelet[2749]: E0712 10:12:45.822026 2749 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 10:12:45.822738 kubelet[2749]: I0712 10:12:45.822664 2749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 10:12:45.822768 kubelet[2749]: I0712 10:12:45.822732 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 10:12:45.823832 kubelet[2749]: I0712 10:12:45.823795 2749 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 10:12:45.825631 kubelet[2749]: I0712 10:12:45.825586 2749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 10:12:45.827748 kubelet[2749]: E0712 10:12:45.827336 2749 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 12 10:12:45.830032 kubelet[2749]: I0712 10:12:45.828735 2749 factory.go:221] Registration of the containerd container factory successfully
Jul 12 10:12:45.830032 kubelet[2749]: I0712 10:12:45.828756 2749 factory.go:221] Registration of the systemd container factory successfully
Jul 12 10:12:45.836091 kubelet[2749]: I0712 10:12:45.836041 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 10:12:45.837379 kubelet[2749]: I0712 10:12:45.837360 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 10:12:45.837379 kubelet[2749]: I0712 10:12:45.837379 2749 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 10:12:45.837449 kubelet[2749]: I0712 10:12:45.837397 2749 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 10:12:45.837449 kubelet[2749]: E0712 10:12:45.837438 2749 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 10:12:45.868635 kubelet[2749]: I0712 10:12:45.868598 2749 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 12 10:12:45.868635 kubelet[2749]: I0712 10:12:45.868618 2749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 12 10:12:45.868635 kubelet[2749]: I0712 10:12:45.868649 2749 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 10:12:45.868826 kubelet[2749]: I0712 10:12:45.868774 2749 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 10:12:45.868826 kubelet[2749]: I0712 10:12:45.868784 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 10:12:45.868826 kubelet[2749]: I0712 10:12:45.868802 2749 policy_none.go:49] "None policy: Start"
Jul 12 10:12:45.869397 kubelet[2749]: I0712 10:12:45.869367 2749 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 12 10:12:45.869397 kubelet[2749]: I0712 10:12:45.869394 2749 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 10:12:45.869568 kubelet[2749]: I0712 10:12:45.869556 2749 state_mem.go:75] "Updated machine memory state"
Jul 12 10:12:45.873982 kubelet[2749]: I0712 10:12:45.873860 2749 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 10:12:45.874055 kubelet[2749]: I0712 10:12:45.874030 2749 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 10:12:45.874080 kubelet[2749]: I0712 10:12:45.874047 2749 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 10:12:45.874265 kubelet[2749]: I0712 10:12:45.874244 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 10:12:45.945019 kubelet[2749]: E0712 10:12:45.944868 2749 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:45.945152 kubelet[2749]: E0712 10:12:45.945069 2749 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:45.976495 kubelet[2749]: I0712 10:12:45.976474 2749 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 12 10:12:45.982408 kubelet[2749]: I0712 10:12:45.982361 2749 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 12 10:12:45.982568 kubelet[2749]: I0712 10:12:45.982440 2749 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 12 10:12:46.022501 kubelet[2749]: I0712 10:12:46.022415 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:46.123457 kubelet[2749]: I0712 10:12:46.123392 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:46.123457 kubelet[2749]: I0712 10:12:46.123447 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:46.123457 kubelet[2749]: I0712 10:12:46.123498 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:46.123457 kubelet[2749]: I0712 10:12:46.123521 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 12 10:12:46.123916 kubelet[2749]: I0712 10:12:46.123547 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 12 10:12:46.123916 kubelet[2749]: I0712 10:12:46.123608 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b26717fba0f5abcefc0f6715f46c89f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9b26717fba0f5abcefc0f6715f46c89f\") " pod="kube-system/kube-apiserver-localhost"
Jul 12 10:12:46.123916 kubelet[2749]: I0712 10:12:46.123687 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:46.123916 kubelet[2749]: I0712 10:12:46.123747 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 10:12:46.811655 kubelet[2749]: I0712 10:12:46.811606 2749 apiserver.go:52] "Watching apiserver" Jul 12 10:12:46.822352 kubelet[2749]: I0712 10:12:46.822283 2749 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 10:12:46.870868 kubelet[2749]: I0712 10:12:46.870674 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.87064967 podStartE2EDuration="2.87064967s" podCreationTimestamp="2025-07-12 10:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:46.869513221 +0000 UTC m=+1.116886360" watchObservedRunningTime="2025-07-12 10:12:46.87064967 +0000 UTC m=+1.118022799" Jul 12 10:12:46.886463 kubelet[2749]: I0712 10:12:46.886386 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.886361368 podStartE2EDuration="1.886361368s" podCreationTimestamp="2025-07-12 10:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:46.879739421 +0000 UTC m=+1.127112550" 
watchObservedRunningTime="2025-07-12 10:12:46.886361368 +0000 UTC m=+1.133734497" Jul 12 10:12:46.894722 kubelet[2749]: I0712 10:12:46.894648 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.894601712 podStartE2EDuration="3.894601712s" podCreationTimestamp="2025-07-12 10:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:46.886869126 +0000 UTC m=+1.134242255" watchObservedRunningTime="2025-07-12 10:12:46.894601712 +0000 UTC m=+1.141974841" Jul 12 10:12:50.812831 kubelet[2749]: I0712 10:12:50.812781 2749 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 10:12:50.813288 containerd[1591]: time="2025-07-12T10:12:50.813254210Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 10:12:50.813614 kubelet[2749]: I0712 10:12:50.813583 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 10:12:51.730731 systemd[1]: Created slice kubepods-besteffort-podde30614a_61ca_4f41_952e_70f04400bdaa.slice - libcontainer container kubepods-besteffort-podde30614a_61ca_4f41_952e_70f04400bdaa.slice. 
Jul 12 10:12:51.778591 kubelet[2749]: I0712 10:12:51.778505 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de30614a-61ca-4f41-952e-70f04400bdaa-kube-proxy\") pod \"kube-proxy-kjps8\" (UID: \"de30614a-61ca-4f41-952e-70f04400bdaa\") " pod="kube-system/kube-proxy-kjps8" Jul 12 10:12:51.778591 kubelet[2749]: I0712 10:12:51.778568 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de30614a-61ca-4f41-952e-70f04400bdaa-lib-modules\") pod \"kube-proxy-kjps8\" (UID: \"de30614a-61ca-4f41-952e-70f04400bdaa\") " pod="kube-system/kube-proxy-kjps8" Jul 12 10:12:51.778591 kubelet[2749]: I0712 10:12:51.778584 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de30614a-61ca-4f41-952e-70f04400bdaa-xtables-lock\") pod \"kube-proxy-kjps8\" (UID: \"de30614a-61ca-4f41-952e-70f04400bdaa\") " pod="kube-system/kube-proxy-kjps8" Jul 12 10:12:51.778591 kubelet[2749]: I0712 10:12:51.778605 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqb6q\" (UniqueName: \"kubernetes.io/projected/de30614a-61ca-4f41-952e-70f04400bdaa-kube-api-access-qqb6q\") pod \"kube-proxy-kjps8\" (UID: \"de30614a-61ca-4f41-952e-70f04400bdaa\") " pod="kube-system/kube-proxy-kjps8" Jul 12 10:12:51.949053 systemd[1]: Created slice kubepods-besteffort-podaf211876_a264_4e49_8ee1_d74d3af6dd2f.slice - libcontainer container kubepods-besteffort-podaf211876_a264_4e49_8ee1_d74d3af6dd2f.slice. 
Jul 12 10:12:51.980513 kubelet[2749]: I0712 10:12:51.980457 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75gf\" (UniqueName: \"kubernetes.io/projected/af211876-a264-4e49-8ee1-d74d3af6dd2f-kube-api-access-q75gf\") pod \"tigera-operator-5bf8dfcb4-6fjzm\" (UID: \"af211876-a264-4e49-8ee1-d74d3af6dd2f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6fjzm" Jul 12 10:12:51.980513 kubelet[2749]: I0712 10:12:51.980504 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af211876-a264-4e49-8ee1-d74d3af6dd2f-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-6fjzm\" (UID: \"af211876-a264-4e49-8ee1-d74d3af6dd2f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6fjzm" Jul 12 10:12:52.041657 containerd[1591]: time="2025-07-12T10:12:52.041458209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjps8,Uid:de30614a-61ca-4f41-952e-70f04400bdaa,Namespace:kube-system,Attempt:0,}" Jul 12 10:12:52.064678 containerd[1591]: time="2025-07-12T10:12:52.064608521Z" level=info msg="connecting to shim 324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186" address="unix:///run/containerd/s/9803d7b2cc5061bf444b1d9a9af75a3c5cf75cbb1db9a1d540274462220a4ebc" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:12:52.106308 systemd[1]: Started cri-containerd-324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186.scope - libcontainer container 324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186. 
Jul 12 10:12:52.135431 containerd[1591]: time="2025-07-12T10:12:52.135374562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjps8,Uid:de30614a-61ca-4f41-952e-70f04400bdaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186\"" Jul 12 10:12:52.138714 containerd[1591]: time="2025-07-12T10:12:52.138359032Z" level=info msg="CreateContainer within sandbox \"324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 10:12:52.152569 containerd[1591]: time="2025-07-12T10:12:52.151574889Z" level=info msg="Container 0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:12:52.160122 containerd[1591]: time="2025-07-12T10:12:52.160079003Z" level=info msg="CreateContainer within sandbox \"324f0ffe944eb691c6453e4c324b9fe0f16e85b04d7169591ae826c7be90e186\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9\"" Jul 12 10:12:52.160817 containerd[1591]: time="2025-07-12T10:12:52.160735246Z" level=info msg="StartContainer for \"0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9\"" Jul 12 10:12:52.162126 containerd[1591]: time="2025-07-12T10:12:52.162093872Z" level=info msg="connecting to shim 0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9" address="unix:///run/containerd/s/9803d7b2cc5061bf444b1d9a9af75a3c5cf75cbb1db9a1d540274462220a4ebc" protocol=ttrpc version=3 Jul 12 10:12:52.185351 systemd[1]: Started cri-containerd-0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9.scope - libcontainer container 0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9. 
Jul 12 10:12:52.252762 containerd[1591]: time="2025-07-12T10:12:52.252354001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6fjzm,Uid:af211876-a264-4e49-8ee1-d74d3af6dd2f,Namespace:tigera-operator,Attempt:0,}" Jul 12 10:12:52.270319 containerd[1591]: time="2025-07-12T10:12:52.270280053Z" level=info msg="StartContainer for \"0d8c131b0e0077cfaceca45ca35373748da813d4e86c920b80e8a352de3da9c9\" returns successfully" Jul 12 10:12:52.279209 containerd[1591]: time="2025-07-12T10:12:52.279146551Z" level=info msg="connecting to shim 900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3" address="unix:///run/containerd/s/92f5da45981c62fac294faaeeb2b690521f9b2efee01d0c498e33fe686ee5345" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:12:52.314333 systemd[1]: Started cri-containerd-900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3.scope - libcontainer container 900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3. Jul 12 10:12:52.363018 containerd[1591]: time="2025-07-12T10:12:52.362954514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6fjzm,Uid:af211876-a264-4e49-8ee1-d74d3af6dd2f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3\"" Jul 12 10:12:52.365055 containerd[1591]: time="2025-07-12T10:12:52.365016303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 10:12:52.935484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634432666.mount: Deactivated successfully. Jul 12 10:12:53.624417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575503121.mount: Deactivated successfully. 
Jul 12 10:12:54.142985 containerd[1591]: time="2025-07-12T10:12:54.142894508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:54.143626 containerd[1591]: time="2025-07-12T10:12:54.143580766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 12 10:12:54.144696 containerd[1591]: time="2025-07-12T10:12:54.144634844Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:54.146728 containerd[1591]: time="2025-07-12T10:12:54.146686385Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:12:54.147252 containerd[1591]: time="2025-07-12T10:12:54.147218379Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.782157622s" Jul 12 10:12:54.147328 containerd[1591]: time="2025-07-12T10:12:54.147257654Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 12 10:12:54.149437 containerd[1591]: time="2025-07-12T10:12:54.149412191Z" level=info msg="CreateContainer within sandbox \"900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 10:12:54.158829 containerd[1591]: time="2025-07-12T10:12:54.158761601Z" level=info msg="Container 
e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:12:54.166026 containerd[1591]: time="2025-07-12T10:12:54.165978709Z" level=info msg="CreateContainer within sandbox \"900eee2f1cb9ae0df1cd60cbb802308ff1fc103179b810fced672fead11e87b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8\"" Jul 12 10:12:54.166776 containerd[1591]: time="2025-07-12T10:12:54.166711455Z" level=info msg="StartContainer for \"e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8\"" Jul 12 10:12:54.167623 containerd[1591]: time="2025-07-12T10:12:54.167596592Z" level=info msg="connecting to shim e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8" address="unix:///run/containerd/s/92f5da45981c62fac294faaeeb2b690521f9b2efee01d0c498e33fe686ee5345" protocol=ttrpc version=3 Jul 12 10:12:54.229379 systemd[1]: Started cri-containerd-e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8.scope - libcontainer container e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8. 
Jul 12 10:12:54.267123 containerd[1591]: time="2025-07-12T10:12:54.267067764Z" level=info msg="StartContainer for \"e821295ef34fad1054957444cdb184c325b092296e297debceb48b1014270ec8\" returns successfully" Jul 12 10:12:54.878438 kubelet[2749]: I0712 10:12:54.878302 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjps8" podStartSLOduration=3.878277524 podStartE2EDuration="3.878277524s" podCreationTimestamp="2025-07-12 10:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:12:52.874009825 +0000 UTC m=+7.121382954" watchObservedRunningTime="2025-07-12 10:12:54.878277524 +0000 UTC m=+9.125650653" Jul 12 10:12:54.879369 kubelet[2749]: I0712 10:12:54.879288 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-6fjzm" podStartSLOduration=2.095640367 podStartE2EDuration="3.879253545s" podCreationTimestamp="2025-07-12 10:12:51 +0000 UTC" firstStartedPulling="2025-07-12 10:12:52.364472945 +0000 UTC m=+6.611846074" lastFinishedPulling="2025-07-12 10:12:54.148086123 +0000 UTC m=+8.395459252" observedRunningTime="2025-07-12 10:12:54.878218341 +0000 UTC m=+9.125591490" watchObservedRunningTime="2025-07-12 10:12:54.879253545 +0000 UTC m=+9.126626694" Jul 12 10:12:58.468204 update_engine[1581]: I20250712 10:12:58.468101 1581 update_attempter.cc:509] Updating boot flags... Jul 12 10:12:59.902770 sudo[1814]: pam_unix(sudo:session): session closed for user root Jul 12 10:12:59.906205 sshd[1813]: Connection closed by 10.0.0.1 port 35624 Jul 12 10:12:59.905308 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jul 12 10:12:59.911123 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:35624.service: Deactivated successfully. Jul 12 10:12:59.915019 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 12 10:12:59.915598 systemd[1]: session-7.scope: Consumed 4.910s CPU time, 221.9M memory peak. Jul 12 10:12:59.918050 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jul 12 10:12:59.919811 systemd-logind[1577]: Removed session 7. Jul 12 10:13:02.947722 systemd[1]: Created slice kubepods-besteffort-pod8edb5e8a_94b5_4d1a_a14b_d00053d11b64.slice - libcontainer container kubepods-besteffort-pod8edb5e8a_94b5_4d1a_a14b_d00053d11b64.slice. Jul 12 10:13:03.049947 kubelet[2749]: I0712 10:13:03.049889 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8edb5e8a-94b5-4d1a-a14b-d00053d11b64-tigera-ca-bundle\") pod \"calico-typha-7c78bd747f-qkcgb\" (UID: \"8edb5e8a-94b5-4d1a-a14b-d00053d11b64\") " pod="calico-system/calico-typha-7c78bd747f-qkcgb" Jul 12 10:13:03.049947 kubelet[2749]: I0712 10:13:03.049939 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p765\" (UniqueName: \"kubernetes.io/projected/8edb5e8a-94b5-4d1a-a14b-d00053d11b64-kube-api-access-7p765\") pod \"calico-typha-7c78bd747f-qkcgb\" (UID: \"8edb5e8a-94b5-4d1a-a14b-d00053d11b64\") " pod="calico-system/calico-typha-7c78bd747f-qkcgb" Jul 12 10:13:03.049947 kubelet[2749]: I0712 10:13:03.049962 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8edb5e8a-94b5-4d1a-a14b-d00053d11b64-typha-certs\") pod \"calico-typha-7c78bd747f-qkcgb\" (UID: \"8edb5e8a-94b5-4d1a-a14b-d00053d11b64\") " pod="calico-system/calico-typha-7c78bd747f-qkcgb" Jul 12 10:13:03.212471 systemd[1]: Created slice kubepods-besteffort-pode2c4d812_6dc7_46c7_a4ef_5d1ac4475981.slice - libcontainer container kubepods-besteffort-pode2c4d812_6dc7_46c7_a4ef_5d1ac4475981.slice. 
Jul 12 10:13:03.251401 kubelet[2749]: I0712 10:13:03.251326 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-lib-modules\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251401 kubelet[2749]: I0712 10:13:03.251384 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-tigera-ca-bundle\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251401 kubelet[2749]: I0712 10:13:03.251409 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-var-lib-calico\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251702 kubelet[2749]: I0712 10:13:03.251437 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-xtables-lock\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251702 kubelet[2749]: I0712 10:13:03.251463 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-cni-bin-dir\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251702 kubelet[2749]: I0712 10:13:03.251484 2749 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-node-certs\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251702 kubelet[2749]: I0712 10:13:03.251505 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-flexvol-driver-host\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251702 kubelet[2749]: I0712 10:13:03.251544 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-policysync\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251863 kubelet[2749]: I0712 10:13:03.251639 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tj54\" (UniqueName: \"kubernetes.io/projected/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-kube-api-access-4tj54\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251863 kubelet[2749]: I0712 10:13:03.251698 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-cni-net-dir\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251863 kubelet[2749]: I0712 10:13:03.251726 2749 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-var-run-calico\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.251863 kubelet[2749]: I0712 10:13:03.251745 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2c4d812-6dc7-46c7-a4ef-5d1ac4475981-cni-log-dir\") pod \"calico-node-2lmbd\" (UID: \"e2c4d812-6dc7-46c7-a4ef-5d1ac4475981\") " pod="calico-system/calico-node-2lmbd" Jul 12 10:13:03.256751 containerd[1591]: time="2025-07-12T10:13:03.256699276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c78bd747f-qkcgb,Uid:8edb5e8a-94b5-4d1a-a14b-d00053d11b64,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:03.356093 kubelet[2749]: E0712 10:13:03.355777 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.356093 kubelet[2749]: W0712 10:13:03.355811 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.356093 kubelet[2749]: E0712 10:13:03.355838 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.365997 kubelet[2749]: E0712 10:13:03.365949 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.365997 kubelet[2749]: W0712 10:13:03.365985 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.366195 kubelet[2749]: E0712 10:13:03.366013 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.374553 kubelet[2749]: E0712 10:13:03.374482 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.374553 kubelet[2749]: W0712 10:13:03.374520 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.374553 kubelet[2749]: E0712 10:13:03.374557 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.429205 containerd[1591]: time="2025-07-12T10:13:03.428623686Z" level=info msg="connecting to shim dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac" address="unix:///run/containerd/s/0db8f28a4e8ffbdb5d0bee2c23f103a53773b875104e3df268118e8932af6e38" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:03.458320 systemd[1]: Started cri-containerd-dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac.scope - libcontainer container dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac. 
Jul 12 10:13:03.509581 kubelet[2749]: E0712 10:13:03.508096 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:03.509692 containerd[1591]: time="2025-07-12T10:13:03.509007041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c78bd747f-qkcgb,Uid:8edb5e8a-94b5-4d1a-a14b-d00053d11b64,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac\"" Jul 12 10:13:03.511348 containerd[1591]: time="2025-07-12T10:13:03.511311724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 10:13:03.517250 containerd[1591]: time="2025-07-12T10:13:03.517151048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2lmbd,Uid:e2c4d812-6dc7-46c7-a4ef-5d1ac4475981,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:03.524188 kubelet[2749]: E0712 10:13:03.524089 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.524188 kubelet[2749]: W0712 10:13:03.524160 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.524400 kubelet[2749]: E0712 10:13:03.524372 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.524738 kubelet[2749]: E0712 10:13:03.524718 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.524738 kubelet[2749]: W0712 10:13:03.524732 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.524803 kubelet[2749]: E0712 10:13:03.524745 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.525021 kubelet[2749]: E0712 10:13:03.524994 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.525021 kubelet[2749]: W0712 10:13:03.525013 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.525093 kubelet[2749]: E0712 10:13:03.525023 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.525343 kubelet[2749]: E0712 10:13:03.525325 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.525343 kubelet[2749]: W0712 10:13:03.525337 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.525661 kubelet[2749]: E0712 10:13:03.525376 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.525687 kubelet[2749]: E0712 10:13:03.525676 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.525712 kubelet[2749]: W0712 10:13:03.525688 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.525712 kubelet[2749]: E0712 10:13:03.525701 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.530770 kubelet[2749]: E0712 10:13:03.530750 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.530770 kubelet[2749]: W0712 10:13:03.530766 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.530840 kubelet[2749]: E0712 10:13:03.530782 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.542759 containerd[1591]: time="2025-07-12T10:13:03.542702273Z" level=info msg="connecting to shim e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a" address="unix:///run/containerd/s/ccd6f206ab804d35dae1e543ab275da2b0fc2482d4571e7599e70b256dccf1d2" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:03.554822 kubelet[2749]: E0712 10:13:03.554706 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.554822 kubelet[2749]: W0712 10:13:03.554742 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.554822 kubelet[2749]: E0712 10:13:03.554772 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.554822 kubelet[2749]: I0712 10:13:03.554816 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wc7\" (UniqueName: \"kubernetes.io/projected/536bd569-4556-43f6-b1a4-efffb6380322-kube-api-access-57wc7\") pod \"csi-node-driver-qqlk5\" (UID: \"536bd569-4556-43f6-b1a4-efffb6380322\") " pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:03.555103 kubelet[2749]: E0712 10:13:03.555053 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.555103 kubelet[2749]: W0712 10:13:03.555064 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.555103 kubelet[2749]: E0712 10:13:03.555082 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.555103 kubelet[2749]: I0712 10:13:03.555097 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/536bd569-4556-43f6-b1a4-efffb6380322-kubelet-dir\") pod \"csi-node-driver-qqlk5\" (UID: \"536bd569-4556-43f6-b1a4-efffb6380322\") " pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:03.555365 kubelet[2749]: E0712 10:13:03.555330 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.555365 kubelet[2749]: W0712 10:13:03.555357 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.555426 kubelet[2749]: E0712 10:13:03.555390 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.555610 kubelet[2749]: E0712 10:13:03.555595 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.555610 kubelet[2749]: W0712 10:13:03.555606 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.555610 kubelet[2749]: E0712 10:13:03.555618 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.555798 kubelet[2749]: E0712 10:13:03.555785 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.555798 kubelet[2749]: W0712 10:13:03.555794 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.555865 kubelet[2749]: E0712 10:13:03.555805 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.555865 kubelet[2749]: I0712 10:13:03.555832 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/536bd569-4556-43f6-b1a4-efffb6380322-registration-dir\") pod \"csi-node-driver-qqlk5\" (UID: \"536bd569-4556-43f6-b1a4-efffb6380322\") " pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:03.556065 kubelet[2749]: E0712 10:13:03.555985 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.556065 kubelet[2749]: W0712 10:13:03.556004 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.556065 kubelet[2749]: E0712 10:13:03.556024 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.556681 kubelet[2749]: I0712 10:13:03.556664 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/536bd569-4556-43f6-b1a4-efffb6380322-varrun\") pod \"csi-node-driver-qqlk5\" (UID: \"536bd569-4556-43f6-b1a4-efffb6380322\") " pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:03.556864 kubelet[2749]: E0712 10:13:03.556850 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.556864 kubelet[2749]: W0712 10:13:03.556861 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.556916 kubelet[2749]: E0712 10:13:03.556875 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.556916 kubelet[2749]: I0712 10:13:03.556888 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/536bd569-4556-43f6-b1a4-efffb6380322-socket-dir\") pod \"csi-node-driver-qqlk5\" (UID: \"536bd569-4556-43f6-b1a4-efffb6380322\") " pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:03.557088 kubelet[2749]: E0712 10:13:03.557074 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.557088 kubelet[2749]: W0712 10:13:03.557084 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.557169 kubelet[2749]: E0712 10:13:03.557152 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.557316 kubelet[2749]: E0712 10:13:03.557303 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.557316 kubelet[2749]: W0712 10:13:03.557313 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.557361 kubelet[2749]: E0712 10:13:03.557348 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.557841 kubelet[2749]: E0712 10:13:03.557829 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.557841 kubelet[2749]: W0712 10:13:03.557839 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.557894 kubelet[2749]: E0712 10:13:03.557846 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.558021 kubelet[2749]: E0712 10:13:03.558009 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.558021 kubelet[2749]: W0712 10:13:03.558018 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.558062 kubelet[2749]: E0712 10:13:03.558027 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.573355 systemd[1]: Started cri-containerd-e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a.scope - libcontainer container e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a. 
Jul 12 10:13:03.601550 containerd[1591]: time="2025-07-12T10:13:03.601496412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2lmbd,Uid:e2c4d812-6dc7-46c7-a4ef-5d1ac4475981,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\"" Jul 12 10:13:03.657938 kubelet[2749]: E0712 10:13:03.657888 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.657938 kubelet[2749]: W0712 10:13:03.657913 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.657938 kubelet[2749]: E0712 10:13:03.657938 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.658211 kubelet[2749]: E0712 10:13:03.658194 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.658211 kubelet[2749]: W0712 10:13:03.658208 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.658290 kubelet[2749]: E0712 10:13:03.658227 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.663278 kubelet[2749]: E0712 10:13:03.663260 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.663278 kubelet[2749]: W0712 10:13:03.663271 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.663361 kubelet[2749]: E0712 10:13:03.663285 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.663501 kubelet[2749]: E0712 10:13:03.663483 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.663501 kubelet[2749]: W0712 10:13:03.663493 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.663563 kubelet[2749]: E0712 10:13:03.663505 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.663738 kubelet[2749]: E0712 10:13:03.663713 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.663738 kubelet[2749]: W0712 10:13:03.663728 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.663876 kubelet[2749]: E0712 10:13:03.663755 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.664476 kubelet[2749]: E0712 10:13:03.664118 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.664476 kubelet[2749]: W0712 10:13:03.664140 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.664476 kubelet[2749]: E0712 10:13:03.664155 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:03.664476 kubelet[2749]: E0712 10:13:03.664359 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.664476 kubelet[2749]: W0712 10:13:03.664383 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.664476 kubelet[2749]: E0712 10:13:03.664395 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:03.670380 kubelet[2749]: E0712 10:13:03.670354 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:03.670380 kubelet[2749]: W0712 10:13:03.670371 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:03.670380 kubelet[2749]: E0712 10:13:03.670385 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:04.838163 kubelet[2749]: E0712 10:13:04.838076 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:05.006894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425906824.mount: Deactivated successfully. 
Jul 12 10:13:05.449357 containerd[1591]: time="2025-07-12T10:13:05.449273829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:05.450089 containerd[1591]: time="2025-07-12T10:13:05.450001724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 12 10:13:05.450996 containerd[1591]: time="2025-07-12T10:13:05.450961149Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:05.453604 containerd[1591]: time="2025-07-12T10:13:05.453555815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:05.454051 containerd[1591]: time="2025-07-12T10:13:05.454011917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.942666289s" Jul 12 10:13:05.454103 containerd[1591]: time="2025-07-12T10:13:05.454054037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 12 10:13:05.455750 containerd[1591]: time="2025-07-12T10:13:05.455661426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 10:13:05.464070 containerd[1591]: time="2025-07-12T10:13:05.463985696Z" level=info msg="CreateContainer within sandbox \"dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 10:13:05.476099 containerd[1591]: time="2025-07-12T10:13:05.475413574Z" level=info msg="Container d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:05.488545 containerd[1591]: time="2025-07-12T10:13:05.488463679Z" level=info msg="CreateContainer within sandbox \"dc6571caeec036bed778bc6bda40be4d1a8779d848d79b5bd24e5a5c1b9fb2ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e\"" Jul 12 10:13:05.489449 containerd[1591]: time="2025-07-12T10:13:05.489354034Z" level=info msg="StartContainer for \"d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e\"" Jul 12 10:13:05.490876 containerd[1591]: time="2025-07-12T10:13:05.490835235Z" level=info msg="connecting to shim d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e" address="unix:///run/containerd/s/0db8f28a4e8ffbdb5d0bee2c23f103a53773b875104e3df268118e8932af6e38" protocol=ttrpc version=3 Jul 12 10:13:05.518338 systemd[1]: Started cri-containerd-d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e.scope - libcontainer container d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e. 
Jul 12 10:13:05.806598 containerd[1591]: time="2025-07-12T10:13:05.806466782Z" level=info msg="StartContainer for \"d8c7c8c455dbdfdb475a73ab2811c766be2fefb563eab26e95880f7a796fc80e\" returns successfully" Jul 12 10:13:05.907562 kubelet[2749]: I0712 10:13:05.907461 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c78bd747f-qkcgb" podStartSLOduration=1.963105182 podStartE2EDuration="3.907411323s" podCreationTimestamp="2025-07-12 10:13:02 +0000 UTC" firstStartedPulling="2025-07-12 10:13:03.510743248 +0000 UTC m=+17.758116367" lastFinishedPulling="2025-07-12 10:13:05.455049369 +0000 UTC m=+19.702422508" observedRunningTime="2025-07-12 10:13:05.907080577 +0000 UTC m=+20.154453706" watchObservedRunningTime="2025-07-12 10:13:05.907411323 +0000 UTC m=+20.154784452" Jul 12 10:13:05.946834 kubelet[2749]: E0712 10:13:05.946778 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.946834 kubelet[2749]: W0712 10:13:05.946803 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.946834 kubelet[2749]: E0712 10:13:05.946827 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.947061 kubelet[2749]: E0712 10:13:05.947038 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.947061 kubelet[2749]: W0712 10:13:05.947048 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.947061 kubelet[2749]: E0712 10:13:05.947056 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.947640 kubelet[2749]: E0712 10:13:05.947241 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.947640 kubelet[2749]: W0712 10:13:05.947253 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.947640 kubelet[2749]: E0712 10:13:05.947262 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.947640 kubelet[2749]: E0712 10:13:05.947453 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.947640 kubelet[2749]: W0712 10:13:05.947461 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.947640 kubelet[2749]: E0712 10:13:05.947470 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.947823 kubelet[2749]: E0712 10:13:05.947782 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.947823 kubelet[2749]: W0712 10:13:05.947802 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.947823 kubelet[2749]: E0712 10:13:05.947813 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.948272 kubelet[2749]: E0712 10:13:05.948033 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.948272 kubelet[2749]: W0712 10:13:05.948046 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.948272 kubelet[2749]: E0712 10:13:05.948055 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.948374 kubelet[2749]: E0712 10:13:05.948301 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.948374 kubelet[2749]: W0712 10:13:05.948309 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.948374 kubelet[2749]: E0712 10:13:05.948319 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.948531 kubelet[2749]: E0712 10:13:05.948509 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.948531 kubelet[2749]: W0712 10:13:05.948520 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.948531 kubelet[2749]: E0712 10:13:05.948529 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.948984 kubelet[2749]: E0712 10:13:05.948945 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.948984 kubelet[2749]: W0712 10:13:05.948970 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.948984 kubelet[2749]: E0712 10:13:05.948992 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.949296 kubelet[2749]: E0712 10:13:05.949267 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.949296 kubelet[2749]: W0712 10:13:05.949280 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.949296 kubelet[2749]: E0712 10:13:05.949289 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.949497 kubelet[2749]: E0712 10:13:05.949469 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.949497 kubelet[2749]: W0712 10:13:05.949476 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.949497 kubelet[2749]: E0712 10:13:05.949484 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.949694 kubelet[2749]: E0712 10:13:05.949678 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.949694 kubelet[2749]: W0712 10:13:05.949689 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.949751 kubelet[2749]: E0712 10:13:05.949698 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.949907 kubelet[2749]: E0712 10:13:05.949880 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.949907 kubelet[2749]: W0712 10:13:05.949902 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.949969 kubelet[2749]: E0712 10:13:05.949911 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.950207 kubelet[2749]: E0712 10:13:05.950135 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.950207 kubelet[2749]: W0712 10:13:05.950155 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.950207 kubelet[2749]: E0712 10:13:05.950196 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.950392 kubelet[2749]: E0712 10:13:05.950375 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.950392 kubelet[2749]: W0712 10:13:05.950390 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.950459 kubelet[2749]: E0712 10:13:05.950399 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.977035 kubelet[2749]: E0712 10:13:05.976985 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.977035 kubelet[2749]: W0712 10:13:05.977020 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.977345 kubelet[2749]: E0712 10:13:05.977058 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.977469 kubelet[2749]: E0712 10:13:05.977454 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.977469 kubelet[2749]: W0712 10:13:05.977466 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.977558 kubelet[2749]: E0712 10:13:05.977485 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.977851 kubelet[2749]: E0712 10:13:05.977819 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.977851 kubelet[2749]: W0712 10:13:05.977843 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.977928 kubelet[2749]: E0712 10:13:05.977872 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.978091 kubelet[2749]: E0712 10:13:05.978067 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.978091 kubelet[2749]: W0712 10:13:05.978084 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.978204 kubelet[2749]: E0712 10:13:05.978098 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.978349 kubelet[2749]: E0712 10:13:05.978331 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.978349 kubelet[2749]: W0712 10:13:05.978343 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.978408 kubelet[2749]: E0712 10:13:05.978372 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.978643 kubelet[2749]: E0712 10:13:05.978625 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.978643 kubelet[2749]: W0712 10:13:05.978637 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.978736 kubelet[2749]: E0712 10:13:05.978703 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.978843 kubelet[2749]: E0712 10:13:05.978827 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.978843 kubelet[2749]: W0712 10:13:05.978838 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.978901 kubelet[2749]: E0712 10:13:05.978874 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.979024 kubelet[2749]: E0712 10:13:05.979005 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.979024 kubelet[2749]: W0712 10:13:05.979016 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.979102 kubelet[2749]: E0712 10:13:05.979031 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.979302 kubelet[2749]: E0712 10:13:05.979275 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.979302 kubelet[2749]: W0712 10:13:05.979292 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.979361 kubelet[2749]: E0712 10:13:05.979314 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.979583 kubelet[2749]: E0712 10:13:05.979554 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.979583 kubelet[2749]: W0712 10:13:05.979573 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.979634 kubelet[2749]: E0712 10:13:05.979594 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.979787 kubelet[2749]: E0712 10:13:05.979771 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.979787 kubelet[2749]: W0712 10:13:05.979782 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.979842 kubelet[2749]: E0712 10:13:05.979796 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.980002 kubelet[2749]: E0712 10:13:05.979986 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.980002 kubelet[2749]: W0712 10:13:05.979997 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.980061 kubelet[2749]: E0712 10:13:05.980011 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.980253 kubelet[2749]: E0712 10:13:05.980236 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.980253 kubelet[2749]: W0712 10:13:05.980247 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.980444 kubelet[2749]: E0712 10:13:05.980401 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.980576 kubelet[2749]: E0712 10:13:05.980558 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.980576 kubelet[2749]: W0712 10:13:05.980570 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.980630 kubelet[2749]: E0712 10:13:05.980580 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.980838 kubelet[2749]: E0712 10:13:05.980817 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.980838 kubelet[2749]: W0712 10:13:05.980828 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.980910 kubelet[2749]: E0712 10:13:05.980842 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.981064 kubelet[2749]: E0712 10:13:05.981042 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.981064 kubelet[2749]: W0712 10:13:05.981052 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.981064 kubelet[2749]: E0712 10:13:05.981066 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:05.981349 kubelet[2749]: E0712 10:13:05.981325 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.981349 kubelet[2749]: W0712 10:13:05.981341 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.981422 kubelet[2749]: E0712 10:13:05.981354 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 10:13:05.981593 kubelet[2749]: E0712 10:13:05.981574 2749 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 10:13:05.981593 kubelet[2749]: W0712 10:13:05.981587 2749 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 10:13:05.981642 kubelet[2749]: E0712 10:13:05.981596 2749 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 10:13:06.827228 containerd[1591]: time="2025-07-12T10:13:06.827146905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:06.828039 containerd[1591]: time="2025-07-12T10:13:06.828003314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 12 10:13:06.829425 containerd[1591]: time="2025-07-12T10:13:06.829392799Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:06.831363 containerd[1591]: time="2025-07-12T10:13:06.831324190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:06.831882 containerd[1591]: time="2025-07-12T10:13:06.831832490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.376082898s" Jul 12 10:13:06.831882 containerd[1591]: time="2025-07-12T10:13:06.831875882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 12 10:13:06.834151 containerd[1591]: time="2025-07-12T10:13:06.834108181Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 10:13:06.838424 kubelet[2749]: E0712 10:13:06.838388 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:06.842012 containerd[1591]: time="2025-07-12T10:13:06.841951174Z" level=info msg="Container 1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:06.852994 containerd[1591]: time="2025-07-12T10:13:06.852927478Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\"" Jul 12 10:13:06.855562 containerd[1591]: time="2025-07-12T10:13:06.855532511Z" level=info msg="StartContainer for \"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\"" Jul 12 10:13:06.856957 containerd[1591]: time="2025-07-12T10:13:06.856931475Z" level=info msg="connecting to shim 1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938" address="unix:///run/containerd/s/ccd6f206ab804d35dae1e543ab275da2b0fc2482d4571e7599e70b256dccf1d2" protocol=ttrpc version=3 Jul 12 10:13:06.887367 systemd[1]: Started cri-containerd-1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938.scope - libcontainer container 1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938. Jul 12 10:13:06.900867 kubelet[2749]: I0712 10:13:06.900821 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:06.947772 systemd[1]: cri-containerd-1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938.scope: Deactivated successfully. 
Jul 12 10:13:06.948186 systemd[1]: cri-containerd-1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938.scope: Consumed 42ms CPU time, 6.6M memory peak, 4.6M written to disk. Jul 12 10:13:06.950052 containerd[1591]: time="2025-07-12T10:13:06.950003744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\" id:\"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\" pid:3453 exited_at:{seconds:1752315186 nanos:949379514}" Jul 12 10:13:07.073754 containerd[1591]: time="2025-07-12T10:13:07.073582376Z" level=info msg="received exit event container_id:\"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\" id:\"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\" pid:3453 exited_at:{seconds:1752315186 nanos:949379514}" Jul 12 10:13:07.075545 containerd[1591]: time="2025-07-12T10:13:07.075350937Z" level=info msg="StartContainer for \"1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938\" returns successfully" Jul 12 10:13:07.098195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e17cadab65d3be6e4446810447dd62070e5bf7fe99c2984ef0e672e8761f938-rootfs.mount: Deactivated successfully. 
Jul 12 10:13:07.905291 containerd[1591]: time="2025-07-12T10:13:07.905231824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 10:13:08.838599 kubelet[2749]: E0712 10:13:08.838515 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:10.139402 kubelet[2749]: I0712 10:13:10.139316 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:10.838122 kubelet[2749]: E0712 10:13:10.838053 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:12.838456 kubelet[2749]: E0712 10:13:12.838366 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:14.019725 containerd[1591]: time="2025-07-12T10:13:14.019659975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.020492 containerd[1591]: time="2025-07-12T10:13:14.020440436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 12 10:13:14.021947 containerd[1591]: time="2025-07-12T10:13:14.021874899Z" level=info msg="ImageCreate event 
name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.023996 containerd[1591]: time="2025-07-12T10:13:14.023962523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:14.024610 containerd[1591]: time="2025-07-12T10:13:14.024582201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.119307667s" Jul 12 10:13:14.024610 containerd[1591]: time="2025-07-12T10:13:14.024615493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 12 10:13:14.026979 containerd[1591]: time="2025-07-12T10:13:14.026868679Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 10:13:14.034945 containerd[1591]: time="2025-07-12T10:13:14.034890655Z" level=info msg="Container 3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:14.043572 containerd[1591]: time="2025-07-12T10:13:14.043533972Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\"" Jul 12 10:13:14.043992 containerd[1591]: time="2025-07-12T10:13:14.043955967Z" 
level=info msg="StartContainer for \"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\"" Jul 12 10:13:14.045241 containerd[1591]: time="2025-07-12T10:13:14.045212986Z" level=info msg="connecting to shim 3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188" address="unix:///run/containerd/s/ccd6f206ab804d35dae1e543ab275da2b0fc2482d4571e7599e70b256dccf1d2" protocol=ttrpc version=3 Jul 12 10:13:14.069338 systemd[1]: Started cri-containerd-3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188.scope - libcontainer container 3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188. Jul 12 10:13:14.174665 containerd[1591]: time="2025-07-12T10:13:14.174617828Z" level=info msg="StartContainer for \"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\" returns successfully" Jul 12 10:13:14.837949 kubelet[2749]: E0712 10:13:14.837849 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:15.286077 containerd[1591]: time="2025-07-12T10:13:15.286012618Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 10:13:15.289838 systemd[1]: cri-containerd-3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188.scope: Deactivated successfully. Jul 12 10:13:15.290209 systemd[1]: cri-containerd-3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188.scope: Consumed 630ms CPU time, 180.7M memory peak, 3.2M read from disk, 171.2M written to disk. 
Jul 12 10:13:15.290663 containerd[1591]: time="2025-07-12T10:13:15.290616551Z" level=info msg="received exit event container_id:\"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\" id:\"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\" pid:3517 exited_at:{seconds:1752315195 nanos:290290316}" Jul 12 10:13:15.290756 containerd[1591]: time="2025-07-12T10:13:15.290736177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\" id:\"3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188\" pid:3517 exited_at:{seconds:1752315195 nanos:290290316}" Jul 12 10:13:15.312756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a18bd17dcc14a056f022ab2fa201c4a8407ceebd848bedf9566a635c26e1188-rootfs.mount: Deactivated successfully. Jul 12 10:13:15.320733 kubelet[2749]: I0712 10:13:15.320690 2749 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 10:13:15.360466 systemd[1]: Created slice kubepods-burstable-podff2428f2_6b40_496d_84be_58e87d58987a.slice - libcontainer container kubepods-burstable-podff2428f2_6b40_496d_84be_58e87d58987a.slice. Jul 12 10:13:15.367689 systemd[1]: Created slice kubepods-burstable-pod1af8e1d1_d77b_4b95_8661_839de795f16d.slice - libcontainer container kubepods-burstable-pod1af8e1d1_d77b_4b95_8661_839de795f16d.slice. Jul 12 10:13:15.372631 systemd[1]: Created slice kubepods-besteffort-pod1cfe8fc0_9493_425f_a1f9_214bd32beb86.slice - libcontainer container kubepods-besteffort-pod1cfe8fc0_9493_425f_a1f9_214bd32beb86.slice. Jul 12 10:13:15.378432 systemd[1]: Created slice kubepods-besteffort-poda1f402c6_d478_442f_ad36_94ee1518dde7.slice - libcontainer container kubepods-besteffort-poda1f402c6_d478_442f_ad36_94ee1518dde7.slice. 
Jul 12 10:13:15.382652 systemd[1]: Created slice kubepods-besteffort-pod304fa002_a9d7_4823_b108_a746b3a2662e.slice - libcontainer container kubepods-besteffort-pod304fa002_a9d7_4823_b108_a746b3a2662e.slice. Jul 12 10:13:15.388442 systemd[1]: Created slice kubepods-besteffort-pod5bc98679_4b95_44d6_bc8a_3bb14fbcd2dc.slice - libcontainer container kubepods-besteffort-pod5bc98679_4b95_44d6_bc8a_3bb14fbcd2dc.slice. Jul 12 10:13:15.393844 systemd[1]: Created slice kubepods-besteffort-pod726e2a2b_dba2_401e_ba58_be1ad9b6ceae.slice - libcontainer container kubepods-besteffort-pod726e2a2b_dba2_401e_ba58_be1ad9b6ceae.slice. Jul 12 10:13:15.553265 kubelet[2749]: I0712 10:13:15.413713 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc-config\") pod \"goldmane-58fd7646b9-mcxx7\" (UID: \"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc\") " pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:15.553265 kubelet[2749]: I0712 10:13:15.514004 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1af8e1d1-d77b-4b95-8661-839de795f16d-config-volume\") pod \"coredns-7c65d6cfc9-mksmw\" (UID: \"1af8e1d1-d77b-4b95-8661-839de795f16d\") " pod="kube-system/coredns-7c65d6cfc9-mksmw" Jul 12 10:13:15.553265 kubelet[2749]: I0712 10:13:15.514046 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jml\" (UniqueName: \"kubernetes.io/projected/a1f402c6-d478-442f-ad36-94ee1518dde7-kube-api-access-q7jml\") pod \"calico-apiserver-64dc94bff6-68bnm\" (UID: \"a1f402c6-d478-442f-ad36-94ee1518dde7\") " pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" Jul 12 10:13:15.553265 kubelet[2749]: I0712 10:13:15.514074 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-phskc\" (UniqueName: \"kubernetes.io/projected/304fa002-a9d7-4823-b108-a746b3a2662e-kube-api-access-phskc\") pod \"whisker-794cbc4b8d-vt692\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " pod="calico-system/whisker-794cbc4b8d-vt692" Jul 12 10:13:15.553265 kubelet[2749]: I0712 10:13:15.514099 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/726e2a2b-dba2-401e-ba58-be1ad9b6ceae-calico-apiserver-certs\") pod \"calico-apiserver-64dc94bff6-rtzpw\" (UID: \"726e2a2b-dba2-401e-ba58-be1ad9b6ceae\") " pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" Jul 12 10:13:15.553517 kubelet[2749]: I0712 10:13:15.514123 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff2428f2-6b40-496d-84be-58e87d58987a-config-volume\") pod \"coredns-7c65d6cfc9-79dhp\" (UID: \"ff2428f2-6b40-496d-84be-58e87d58987a\") " pod="kube-system/coredns-7c65d6cfc9-79dhp" Jul 12 10:13:15.553517 kubelet[2749]: I0712 10:13:15.514142 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-backend-key-pair\") pod \"whisker-794cbc4b8d-vt692\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " pod="calico-system/whisker-794cbc4b8d-vt692" Jul 12 10:13:15.553517 kubelet[2749]: I0712 10:13:15.514198 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cfe8fc0-9493-425f-a1f9-214bd32beb86-tigera-ca-bundle\") pod \"calico-kube-controllers-6645b4c756-pfljt\" (UID: \"1cfe8fc0-9493-425f-a1f9-214bd32beb86\") " pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" Jul 12 10:13:15.553517 kubelet[2749]: I0712 
10:13:15.514225 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-mcxx7\" (UID: \"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc\") " pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:15.553517 kubelet[2749]: I0712 10:13:15.514244 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvgv\" (UniqueName: \"kubernetes.io/projected/1af8e1d1-d77b-4b95-8661-839de795f16d-kube-api-access-dkvgv\") pod \"coredns-7c65d6cfc9-mksmw\" (UID: \"1af8e1d1-d77b-4b95-8661-839de795f16d\") " pod="kube-system/coredns-7c65d6cfc9-mksmw" Jul 12 10:13:15.553639 kubelet[2749]: I0712 10:13:15.514267 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnz2m\" (UniqueName: \"kubernetes.io/projected/1cfe8fc0-9493-425f-a1f9-214bd32beb86-kube-api-access-cnz2m\") pod \"calico-kube-controllers-6645b4c756-pfljt\" (UID: \"1cfe8fc0-9493-425f-a1f9-214bd32beb86\") " pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" Jul 12 10:13:15.553639 kubelet[2749]: I0712 10:13:15.514290 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxrhr\" (UniqueName: \"kubernetes.io/projected/ff2428f2-6b40-496d-84be-58e87d58987a-kube-api-access-zxrhr\") pod \"coredns-7c65d6cfc9-79dhp\" (UID: \"ff2428f2-6b40-496d-84be-58e87d58987a\") " pod="kube-system/coredns-7c65d6cfc9-79dhp" Jul 12 10:13:15.553639 kubelet[2749]: I0712 10:13:15.514312 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc-goldmane-key-pair\") pod \"goldmane-58fd7646b9-mcxx7\" (UID: \"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc\") " 
pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:15.553639 kubelet[2749]: I0712 10:13:15.514334 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1f402c6-d478-442f-ad36-94ee1518dde7-calico-apiserver-certs\") pod \"calico-apiserver-64dc94bff6-68bnm\" (UID: \"a1f402c6-d478-442f-ad36-94ee1518dde7\") " pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" Jul 12 10:13:15.553639 kubelet[2749]: I0712 10:13:15.514370 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-ca-bundle\") pod \"whisker-794cbc4b8d-vt692\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " pod="calico-system/whisker-794cbc4b8d-vt692" Jul 12 10:13:15.553777 kubelet[2749]: I0712 10:13:15.514396 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfc4c\" (UniqueName: \"kubernetes.io/projected/5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc-kube-api-access-qfc4c\") pod \"goldmane-58fd7646b9-mcxx7\" (UID: \"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc\") " pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:15.553777 kubelet[2749]: I0712 10:13:15.514436 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbvz2\" (UniqueName: \"kubernetes.io/projected/726e2a2b-dba2-401e-ba58-be1ad9b6ceae-kube-api-access-nbvz2\") pod \"calico-apiserver-64dc94bff6-rtzpw\" (UID: \"726e2a2b-dba2-401e-ba58-be1ad9b6ceae\") " pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" Jul 12 10:13:15.666783 containerd[1591]: time="2025-07-12T10:13:15.666736732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-79dhp,Uid:ff2428f2-6b40-496d-84be-58e87d58987a,Namespace:kube-system,Attempt:0,}" Jul 12 
10:13:15.671334 containerd[1591]: time="2025-07-12T10:13:15.671293215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mksmw,Uid:1af8e1d1-d77b-4b95-8661-839de795f16d,Namespace:kube-system,Attempt:0,}" Jul 12 10:13:15.676353 containerd[1591]: time="2025-07-12T10:13:15.676286782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6645b4c756-pfljt,Uid:1cfe8fc0-9493-425f-a1f9-214bd32beb86,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:15.683373 containerd[1591]: time="2025-07-12T10:13:15.683321384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-68bnm,Uid:a1f402c6-d478-442f-ad36-94ee1518dde7,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:15.688333 containerd[1591]: time="2025-07-12T10:13:15.688284915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794cbc4b8d-vt692,Uid:304fa002-a9d7-4823-b108-a746b3a2662e,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:15.692659 containerd[1591]: time="2025-07-12T10:13:15.692448087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-mcxx7,Uid:5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:15.774891 containerd[1591]: time="2025-07-12T10:13:15.774830331Z" level=error msg="Failed to destroy network for sandbox \"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.776047 containerd[1591]: time="2025-07-12T10:13:15.775979245Z" level=error msg="Failed to destroy network for sandbox \"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 12 10:13:15.788285 containerd[1591]: time="2025-07-12T10:13:15.788231536Z" level=error msg="Failed to destroy network for sandbox \"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.790758 containerd[1591]: time="2025-07-12T10:13:15.790673626Z" level=error msg="Failed to destroy network for sandbox \"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.797046 containerd[1591]: time="2025-07-12T10:13:15.796993582Z" level=error msg="Failed to destroy network for sandbox \"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.802296 containerd[1591]: time="2025-07-12T10:13:15.802246858Z" level=error msg="Failed to destroy network for sandbox \"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.853873 containerd[1591]: time="2025-07-12T10:13:15.853762564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-68bnm,Uid:a1f402c6-d478-442f-ad36-94ee1518dde7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.854579 containerd[1591]: time="2025-07-12T10:13:15.854550869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-rtzpw,Uid:726e2a2b-dba2-401e-ba58-be1ad9b6ceae,Namespace:calico-apiserver,Attempt:0,}" Jul 12 10:13:15.861738 kubelet[2749]: E0712 10:13:15.861683 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.862063 kubelet[2749]: E0712 10:13:15.861759 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" Jul 12 10:13:15.862063 kubelet[2749]: E0712 10:13:15.861779 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" Jul 12 10:13:15.862063 kubelet[2749]: E0712 10:13:15.861824 2749 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64dc94bff6-68bnm_calico-apiserver(a1f402c6-d478-442f-ad36-94ee1518dde7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64dc94bff6-68bnm_calico-apiserver(a1f402c6-d478-442f-ad36-94ee1518dde7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6484903e2e6fe12869cce9151fa0fe74d9e6112291d26c5a9970b9755ba1b2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" podUID="a1f402c6-d478-442f-ad36-94ee1518dde7" Jul 12 10:13:15.884779 containerd[1591]: time="2025-07-12T10:13:15.884717678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-794cbc4b8d-vt692,Uid:304fa002-a9d7-4823-b108-a746b3a2662e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.885011 kubelet[2749]: E0712 10:13:15.884934 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.885059 kubelet[2749]: E0712 10:13:15.885033 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794cbc4b8d-vt692" Jul 12 10:13:15.885086 kubelet[2749]: E0712 10:13:15.885057 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-794cbc4b8d-vt692" Jul 12 10:13:15.885193 kubelet[2749]: E0712 10:13:15.885110 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-794cbc4b8d-vt692_calico-system(304fa002-a9d7-4823-b108-a746b3a2662e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-794cbc4b8d-vt692_calico-system(304fa002-a9d7-4823-b108-a746b3a2662e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea13a8d27f06c39458f9a0d80dc124c27250b7e9de964abe382b31a186004c3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-794cbc4b8d-vt692" podUID="304fa002-a9d7-4823-b108-a746b3a2662e" Jul 12 10:13:15.915618 containerd[1591]: time="2025-07-12T10:13:15.915562855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6645b4c756-pfljt,Uid:1cfe8fc0-9493-425f-a1f9-214bd32beb86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.915933 kubelet[2749]: E0712 10:13:15.915857 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.915977 kubelet[2749]: E0712 10:13:15.915957 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" Jul 12 10:13:15.916002 kubelet[2749]: E0712 10:13:15.915975 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" Jul 12 10:13:15.916122 kubelet[2749]: E0712 10:13:15.916094 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6645b4c756-pfljt_calico-system(1cfe8fc0-9493-425f-a1f9-214bd32beb86)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6645b4c756-pfljt_calico-system(1cfe8fc0-9493-425f-a1f9-214bd32beb86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"063f9198925fa3b93be6fe4b3a84e3aeb09173b96e1f596433513430ab1bbf16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" podUID="1cfe8fc0-9493-425f-a1f9-214bd32beb86" Jul 12 10:13:15.922098 containerd[1591]: time="2025-07-12T10:13:15.922039225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 10:13:15.931933 containerd[1591]: time="2025-07-12T10:13:15.931854785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mksmw,Uid:1af8e1d1-d77b-4b95-8661-839de795f16d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.932096 kubelet[2749]: E0712 10:13:15.932065 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.932162 kubelet[2749]: E0712 10:13:15.932121 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mksmw" Jul 12 10:13:15.932162 kubelet[2749]: E0712 10:13:15.932138 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mksmw" Jul 12 10:13:15.932245 kubelet[2749]: E0712 10:13:15.932206 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mksmw_kube-system(1af8e1d1-d77b-4b95-8661-839de795f16d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mksmw_kube-system(1af8e1d1-d77b-4b95-8661-839de795f16d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93444dd7e50d8ebc9cfdedee24c0d4a472a4bc0e2cb6ee4b6b067da280a3c0f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mksmw" podUID="1af8e1d1-d77b-4b95-8661-839de795f16d" Jul 12 10:13:15.969649 containerd[1591]: time="2025-07-12T10:13:15.969591535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-79dhp,Uid:ff2428f2-6b40-496d-84be-58e87d58987a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.969852 kubelet[2749]: E0712 10:13:15.969797 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:15.969931 kubelet[2749]: E0712 10:13:15.969853 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-79dhp" Jul 12 10:13:15.969931 kubelet[2749]: E0712 10:13:15.969872 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-79dhp" Jul 12 10:13:15.969931 kubelet[2749]: E0712 10:13:15.969906 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-79dhp_kube-system(ff2428f2-6b40-496d-84be-58e87d58987a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-79dhp_kube-system(ff2428f2-6b40-496d-84be-58e87d58987a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"5dbc574030c9b274c6a8e484a46e11519e3784d7dd3fabb77464f397d284cd71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-79dhp" podUID="ff2428f2-6b40-496d-84be-58e87d58987a" Jul 12 10:13:16.006986 containerd[1591]: time="2025-07-12T10:13:16.006932927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-mcxx7,Uid:5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.007208 kubelet[2749]: E0712 10:13:16.007136 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.007285 kubelet[2749]: E0712 10:13:16.007211 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:16.007285 kubelet[2749]: E0712 10:13:16.007228 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-mcxx7" Jul 12 10:13:16.007285 kubelet[2749]: E0712 10:13:16.007275 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-mcxx7_calico-system(5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-mcxx7_calico-system(5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99824707482abc33d669903d01b7408d82cd4cfab062c2e0e6c3b31e57e0e57c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-mcxx7" podUID="5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc" Jul 12 10:13:16.121784 containerd[1591]: time="2025-07-12T10:13:16.121633417Z" level=error msg="Failed to destroy network for sandbox \"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.123133 containerd[1591]: time="2025-07-12T10:13:16.123102634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-rtzpw,Uid:726e2a2b-dba2-401e-ba58-be1ad9b6ceae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.123362 kubelet[2749]: E0712 10:13:16.123314 2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.123414 kubelet[2749]: E0712 10:13:16.123382 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" Jul 12 10:13:16.123414 kubelet[2749]: E0712 10:13:16.123401 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" Jul 12 10:13:16.123483 kubelet[2749]: E0712 10:13:16.123445 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64dc94bff6-rtzpw_calico-apiserver(726e2a2b-dba2-401e-ba58-be1ad9b6ceae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-64dc94bff6-rtzpw_calico-apiserver(726e2a2b-dba2-401e-ba58-be1ad9b6ceae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ee4188b6411cd79b7419dfc6f543dcbccdd9eab368d5f00ee53e5d8f55e3e83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" podUID="726e2a2b-dba2-401e-ba58-be1ad9b6ceae" Jul 12 10:13:16.844918 systemd[1]: Created slice kubepods-besteffort-pod536bd569_4556_43f6_b1a4_efffb6380322.slice - libcontainer container kubepods-besteffort-pod536bd569_4556_43f6_b1a4_efffb6380322.slice. Jul 12 10:13:16.847968 containerd[1591]: time="2025-07-12T10:13:16.847924504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqlk5,Uid:536bd569-4556-43f6-b1a4-efffb6380322,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:16.914349 containerd[1591]: time="2025-07-12T10:13:16.914253373Z" level=error msg="Failed to destroy network for sandbox \"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.915801 containerd[1591]: time="2025-07-12T10:13:16.915756323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqlk5,Uid:536bd569-4556-43f6-b1a4-efffb6380322,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.916150 kubelet[2749]: E0712 10:13:16.916099 
2749 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 10:13:16.916611 kubelet[2749]: E0712 10:13:16.916190 2749 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:16.916611 kubelet[2749]: E0712 10:13:16.916212 2749 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqlk5" Jul 12 10:13:16.916611 kubelet[2749]: E0712 10:13:16.916261 2749 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqlk5_calico-system(536bd569-4556-43f6-b1a4-efffb6380322)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqlk5_calico-system(536bd569-4556-43f6-b1a4-efffb6380322)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f8a198a8d74c72ce6fcabb80ff2296670f845980caafabc7dbe9f0af709ec98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqlk5" podUID="536bd569-4556-43f6-b1a4-efffb6380322" Jul 12 10:13:16.916913 systemd[1]: run-netns-cni\x2d065e6fa6\x2d8160\x2de61b\x2d9f88\x2d1c57fe8332ba.mount: Deactivated successfully. Jul 12 10:13:23.208044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592806129.mount: Deactivated successfully. Jul 12 10:13:24.138047 containerd[1591]: time="2025-07-12T10:13:24.137978119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:24.138827 containerd[1591]: time="2025-07-12T10:13:24.138733740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 12 10:13:24.140033 containerd[1591]: time="2025-07-12T10:13:24.139997647Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:24.142289 containerd[1591]: time="2025-07-12T10:13:24.142249181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:24.149034 containerd[1591]: time="2025-07-12T10:13:24.148970750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.226890018s" Jul 12 10:13:24.149034 containerd[1591]: time="2025-07-12T10:13:24.149022037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 12 10:13:24.174081 containerd[1591]: time="2025-07-12T10:13:24.173944561Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 10:13:24.188648 containerd[1591]: time="2025-07-12T10:13:24.188593096Z" level=info msg="Container d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:24.249681 containerd[1591]: time="2025-07-12T10:13:24.249591488Z" level=info msg="CreateContainer within sandbox \"e0885bc008f0866bf87240d6ab56b7147add916b107f29b125e38ab470b2f88a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\"" Jul 12 10:13:24.250197 containerd[1591]: time="2025-07-12T10:13:24.250147313Z" level=info msg="StartContainer for \"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\"" Jul 12 10:13:24.251774 containerd[1591]: time="2025-07-12T10:13:24.251745198Z" level=info msg="connecting to shim d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11" address="unix:///run/containerd/s/ccd6f206ab804d35dae1e543ab275da2b0fc2482d4571e7599e70b256dccf1d2" protocol=ttrpc version=3 Jul 12 10:13:24.274380 systemd[1]: Started cri-containerd-d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11.scope - libcontainer container d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11. Jul 12 10:13:24.390807 containerd[1591]: time="2025-07-12T10:13:24.390652229Z" level=info msg="StartContainer for \"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\" returns successfully" Jul 12 10:13:24.414680 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 10:13:24.414790 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jul 12 10:13:24.671024 kubelet[2749]: I0712 10:13:24.670859 2749 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-backend-key-pair\") pod \"304fa002-a9d7-4823-b108-a746b3a2662e\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " Jul 12 10:13:24.671024 kubelet[2749]: I0712 10:13:24.670923 2749 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phskc\" (UniqueName: \"kubernetes.io/projected/304fa002-a9d7-4823-b108-a746b3a2662e-kube-api-access-phskc\") pod \"304fa002-a9d7-4823-b108-a746b3a2662e\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " Jul 12 10:13:24.671024 kubelet[2749]: I0712 10:13:24.670973 2749 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-ca-bundle\") pod \"304fa002-a9d7-4823-b108-a746b3a2662e\" (UID: \"304fa002-a9d7-4823-b108-a746b3a2662e\") " Jul 12 10:13:24.673406 kubelet[2749]: I0712 10:13:24.671540 2749 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "304fa002-a9d7-4823-b108-a746b3a2662e" (UID: "304fa002-a9d7-4823-b108-a746b3a2662e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 10:13:24.676981 kubelet[2749]: I0712 10:13:24.676886 2749 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304fa002-a9d7-4823-b108-a746b3a2662e-kube-api-access-phskc" (OuterVolumeSpecName: "kube-api-access-phskc") pod "304fa002-a9d7-4823-b108-a746b3a2662e" (UID: "304fa002-a9d7-4823-b108-a746b3a2662e"). InnerVolumeSpecName "kube-api-access-phskc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 10:13:24.678644 systemd[1]: var-lib-kubelet-pods-304fa002\x2da9d7\x2d4823\x2db108\x2da746b3a2662e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dphskc.mount: Deactivated successfully. Jul 12 10:13:24.680084 kubelet[2749]: I0712 10:13:24.680060 2749 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "304fa002-a9d7-4823-b108-a746b3a2662e" (UID: "304fa002-a9d7-4823-b108-a746b3a2662e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 10:13:24.682030 systemd[1]: var-lib-kubelet-pods-304fa002\x2da9d7\x2d4823\x2db108\x2da746b3a2662e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 10:13:24.771515 kubelet[2749]: I0712 10:13:24.771456 2749 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:24.771515 kubelet[2749]: I0712 10:13:24.771497 2749 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phskc\" (UniqueName: \"kubernetes.io/projected/304fa002-a9d7-4823-b108-a746b3a2662e-kube-api-access-phskc\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:24.771515 kubelet[2749]: I0712 10:13:24.771506 2749 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304fa002-a9d7-4823-b108-a746b3a2662e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 12 10:13:25.253455 systemd[1]: Removed slice kubepods-besteffort-pod304fa002_a9d7_4823_b108_a746b3a2662e.slice - libcontainer container kubepods-besteffort-pod304fa002_a9d7_4823_b108_a746b3a2662e.slice. 
Jul 12 10:13:25.261202 kubelet[2749]: I0712 10:13:25.260074 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2lmbd" podStartSLOduration=1.698453741 podStartE2EDuration="22.26005595s" podCreationTimestamp="2025-07-12 10:13:03 +0000 UTC" firstStartedPulling="2025-07-12 10:13:03.602727292 +0000 UTC m=+17.850100411" lastFinishedPulling="2025-07-12 10:13:24.164329491 +0000 UTC m=+38.411702620" observedRunningTime="2025-07-12 10:13:25.257425214 +0000 UTC m=+39.504798353" watchObservedRunningTime="2025-07-12 10:13:25.26005595 +0000 UTC m=+39.507429079" Jul 12 10:13:25.320308 systemd[1]: Created slice kubepods-besteffort-pod9c35f064_ef20_4d5a_a4d6_eac75895af59.slice - libcontainer container kubepods-besteffort-pod9c35f064_ef20_4d5a_a4d6_eac75895af59.slice. Jul 12 10:13:25.475305 kubelet[2749]: I0712 10:13:25.475232 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c35f064-ef20-4d5a-a4d6-eac75895af59-whisker-ca-bundle\") pod \"whisker-774d884cc6-zsqw5\" (UID: \"9c35f064-ef20-4d5a-a4d6-eac75895af59\") " pod="calico-system/whisker-774d884cc6-zsqw5" Jul 12 10:13:25.475305 kubelet[2749]: I0712 10:13:25.475285 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c35f064-ef20-4d5a-a4d6-eac75895af59-whisker-backend-key-pair\") pod \"whisker-774d884cc6-zsqw5\" (UID: \"9c35f064-ef20-4d5a-a4d6-eac75895af59\") " pod="calico-system/whisker-774d884cc6-zsqw5" Jul 12 10:13:25.475305 kubelet[2749]: I0712 10:13:25.475309 2749 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfr2b\" (UniqueName: \"kubernetes.io/projected/9c35f064-ef20-4d5a-a4d6-eac75895af59-kube-api-access-bfr2b\") pod \"whisker-774d884cc6-zsqw5\" (UID: 
\"9c35f064-ef20-4d5a-a4d6-eac75895af59\") " pod="calico-system/whisker-774d884cc6-zsqw5" Jul 12 10:13:25.624465 containerd[1591]: time="2025-07-12T10:13:25.624339792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774d884cc6-zsqw5,Uid:9c35f064-ef20-4d5a-a4d6-eac75895af59,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:25.845121 kubelet[2749]: I0712 10:13:25.845014 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304fa002-a9d7-4823-b108-a746b3a2662e" path="/var/lib/kubelet/pods/304fa002-a9d7-4823-b108-a746b3a2662e/volumes" Jul 12 10:13:26.281403 systemd-networkd[1498]: calia4d98a38836: Link UP Jul 12 10:13:26.281614 systemd-networkd[1498]: calia4d98a38836: Gained carrier Jul 12 10:13:26.321622 containerd[1591]: 2025-07-12 10:13:25.667 [INFO][3896] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 10:13:26.321622 containerd[1591]: 2025-07-12 10:13:25.685 [INFO][3896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--774d884cc6--zsqw5-eth0 whisker-774d884cc6- calico-system 9c35f064-ef20-4d5a-a4d6-eac75895af59 881 0 2025-07-12 10:13:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:774d884cc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-774d884cc6-zsqw5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4d98a38836 [] [] }} ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-" Jul 12 10:13:26.321622 containerd[1591]: 2025-07-12 10:13:25.685 [INFO][3896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" 
Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0" Jul 12 10:13:26.321622 containerd[1591]: 2025-07-12 10:13:25.793 [INFO][3910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" HandleID="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Workload="localhost-k8s-whisker--774d884cc6--zsqw5-eth0" Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.794 [INFO][3910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" HandleID="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Workload="localhost-k8s-whisker--774d884cc6--zsqw5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000386350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-774d884cc6-zsqw5", "timestamp":"2025-07-12 10:13:25.792724949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.794 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.794 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.794 [INFO][3910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.806 [INFO][3910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" host="localhost"
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.820 [INFO][3910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.826 [INFO][3910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.828 [INFO][3910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.830 [INFO][3910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:26.321984 containerd[1591]: 2025-07-12 10:13:25.830 [INFO][3910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" host="localhost"
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:25.834 [INFO][3910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:25.896 [INFO][3910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" host="localhost"
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:26.266 [INFO][3910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" host="localhost"
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:26.266 [INFO][3910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" host="localhost"
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:26.266 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 10:13:26.322438 containerd[1591]: 2025-07-12 10:13:26.266 [INFO][3910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" HandleID="k8s-pod-network.b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Workload="localhost-k8s-whisker--774d884cc6--zsqw5-eth0"
Jul 12 10:13:26.322569 containerd[1591]: 2025-07-12 10:13:26.271 [INFO][3896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--774d884cc6--zsqw5-eth0", GenerateName:"whisker-774d884cc6-", Namespace:"calico-system", SelfLink:"", UID:"9c35f064-ef20-4d5a-a4d6-eac75895af59", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774d884cc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-774d884cc6-zsqw5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4d98a38836", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 10:13:26.322569 containerd[1591]: 2025-07-12 10:13:26.271 [INFO][3896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0"
Jul 12 10:13:26.322660 containerd[1591]: 2025-07-12 10:13:26.271 [INFO][3896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4d98a38836 ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0"
Jul 12 10:13:26.322660 containerd[1591]: 2025-07-12 10:13:26.280 [INFO][3896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0"
Jul 12 10:13:26.322705 containerd[1591]: 2025-07-12 10:13:26.283 [INFO][3896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--774d884cc6--zsqw5-eth0", GenerateName:"whisker-774d884cc6-", Namespace:"calico-system", SelfLink:"", UID:"9c35f064-ef20-4d5a-a4d6-eac75895af59", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"774d884cc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be", Pod:"whisker-774d884cc6-zsqw5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4d98a38836", MAC:"a2:92:e2:15:af:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 10:13:26.322759 containerd[1591]: 2025-07-12 10:13:26.308 [INFO][3896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" Namespace="calico-system" Pod="whisker-774d884cc6-zsqw5" WorkloadEndpoint="localhost-k8s-whisker--774d884cc6--zsqw5-eth0"
Jul 12 10:13:26.520989 containerd[1591]: time="2025-07-12T10:13:26.520934497Z" level=info msg="connecting to shim b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be" address="unix:///run/containerd/s/b86cae1d60aee4f11aae149e7612933de7acc4e93ea9a38627b6824f2f446b30" namespace=k8s.io protocol=ttrpc version=3
Jul 12 10:13:26.554835 systemd[1]: Started cri-containerd-b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be.scope - libcontainer container b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be.
Jul 12 10:13:26.568994 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 10:13:26.569948 containerd[1591]: time="2025-07-12T10:13:26.569902832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\" id:\"4a42b479035ae9579dff77f8e229c02485a675f41894e0109f8161fa1e9e103d\" pid:4069 exit_status:1 exited_at:{seconds:1752315206 nanos:569459338}"
Jul 12 10:13:26.608862 containerd[1591]: time="2025-07-12T10:13:26.608802879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-774d884cc6-zsqw5,Uid:9c35f064-ef20-4d5a-a4d6-eac75895af59,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be\""
Jul 12 10:13:26.610978 containerd[1591]: time="2025-07-12T10:13:26.610930600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 12 10:13:26.638983 systemd-networkd[1498]: vxlan.calico: Link UP
Jul 12 10:13:26.638996 systemd-networkd[1498]: vxlan.calico: Gained carrier
Jul 12 10:13:26.838922 containerd[1591]: time="2025-07-12T10:13:26.838753639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-79dhp,Uid:ff2428f2-6b40-496d-84be-58e87d58987a,Namespace:kube-system,Attempt:0,}"
Jul 12 10:13:26.915591 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:47726.service - OpenSSH per-connection server daemon (10.0.0.1:47726).
Jul 12 10:13:27.018562 systemd-networkd[1498]: calif8e02e6e609: Link UP
Jul 12 10:13:27.022213 systemd-networkd[1498]: calif8e02e6e609: Gained carrier
Jul 12 10:13:27.040994 containerd[1591]: 2025-07-12 10:13:26.878 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0 coredns-7c65d6cfc9- kube-system ff2428f2-6b40-496d-84be-58e87d58987a 794 0 2025-07-12 10:12:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-79dhp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif8e02e6e609 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-"
Jul 12 10:13:27.040994 containerd[1591]: 2025-07-12 10:13:26.878 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.040994 containerd[1591]: 2025-07-12 10:13:26.915 [INFO][4185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" HandleID="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Workload="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.915 [INFO][4185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" HandleID="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Workload="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-79dhp", "timestamp":"2025-07-12 10:13:26.915678098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.915 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.915 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.916 [INFO][4185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.923 [INFO][4185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" host="localhost"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.929 [INFO][4185] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.935 [INFO][4185] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.937 [INFO][4185] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.990 [INFO][4185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:27.041282 containerd[1591]: 2025-07-12 10:13:26.990 [INFO][4185] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" host="localhost"
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:26.992 [INFO][4185] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:26.997 [INFO][4185] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" host="localhost"
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:27.006 [INFO][4185] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" host="localhost"
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:27.006 [INFO][4185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" host="localhost"
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:27.006 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 10:13:27.041539 containerd[1591]: 2025-07-12 10:13:27.006 [INFO][4185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" HandleID="k8s-pod-network.3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Workload="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.041667 containerd[1591]: 2025-07-12 10:13:27.013 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff2428f2-6b40-496d-84be-58e87d58987a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-79dhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8e02e6e609", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 10:13:27.041739 containerd[1591]: 2025-07-12 10:13:27.014 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.041739 containerd[1591]: 2025-07-12 10:13:27.014 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8e02e6e609 ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.041739 containerd[1591]: 2025-07-12 10:13:27.022 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.041812 containerd[1591]: 2025-07-12 10:13:27.023 [INFO][4170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff2428f2-6b40-496d-84be-58e87d58987a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6", Pod:"coredns-7c65d6cfc9-79dhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8e02e6e609", MAC:"32:39:21:d0:14:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 10:13:27.041812 containerd[1591]: 2025-07-12 10:13:27.037 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-79dhp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--79dhp-eth0"
Jul 12 10:13:27.062736 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 47726 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:13:27.065415 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:13:27.066992 containerd[1591]: time="2025-07-12T10:13:27.066941159Z" level=info msg="connecting to shim 3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6" address="unix:///run/containerd/s/7f306495536a88296bf5ebee563a60482d5eacf4eceeac38cf9208d02e16fcd6" namespace=k8s.io protocol=ttrpc version=3
Jul 12 10:13:27.071298 systemd-logind[1577]: New session 8 of user core.
Jul 12 10:13:27.076313 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 10:13:27.096332 systemd[1]: Started cri-containerd-3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6.scope - libcontainer container 3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6.
Jul 12 10:13:27.110261 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 10:13:27.145379 containerd[1591]: time="2025-07-12T10:13:27.145327637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-79dhp,Uid:ff2428f2-6b40-496d-84be-58e87d58987a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6\""
Jul 12 10:13:27.150040 containerd[1591]: time="2025-07-12T10:13:27.150000030Z" level=info msg="CreateContainer within sandbox \"3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 10:13:27.168520 containerd[1591]: time="2025-07-12T10:13:27.168467011Z" level=info msg="Container a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:13:27.169845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715806546.mount: Deactivated successfully.
Jul 12 10:13:27.178502 containerd[1591]: time="2025-07-12T10:13:27.178448492Z" level=info msg="CreateContainer within sandbox \"3b36aac60f6d9f06a691559fb0f499fb9bf62078c4879554d061e4b4a76fbbd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12\""
Jul 12 10:13:27.179423 containerd[1591]: time="2025-07-12T10:13:27.179387127Z" level=info msg="StartContainer for \"a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12\""
Jul 12 10:13:27.180721 containerd[1591]: time="2025-07-12T10:13:27.180660490Z" level=info msg="connecting to shim a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12" address="unix:///run/containerd/s/7f306495536a88296bf5ebee563a60482d5eacf4eceeac38cf9208d02e16fcd6" protocol=ttrpc version=3
Jul 12 10:13:27.208358 systemd[1]: Started cri-containerd-a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12.scope - libcontainer container a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12.
Jul 12 10:13:27.226951 sshd[4268]: Connection closed by 10.0.0.1 port 47726
Jul 12 10:13:27.228110 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Jul 12 10:13:27.232492 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:47726.service: Deactivated successfully.
Jul 12 10:13:27.235844 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 10:13:27.237209 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit.
Jul 12 10:13:27.239551 systemd-logind[1577]: Removed session 8.
Jul 12 10:13:27.249222 containerd[1591]: time="2025-07-12T10:13:27.249139396Z" level=info msg="StartContainer for \"a527734401dcfa24dc9957c12c7bdff0b61b85ea0a13d3285633d6774afe3c12\" returns successfully"
Jul 12 10:13:27.340109 containerd[1591]: time="2025-07-12T10:13:27.340033263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\" id:\"333614cc4ead8d78fe69e2d1f49b9f10080ce60adf41c2bb15a0f0c9b45245a7\" pid:4336 exit_status:1 exited_at:{seconds:1752315207 nanos:339603867}"
Jul 12 10:13:27.623346 systemd-networkd[1498]: calia4d98a38836: Gained IPv6LL
Jul 12 10:13:27.840098 containerd[1591]: time="2025-07-12T10:13:27.840019890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mksmw,Uid:1af8e1d1-d77b-4b95-8661-839de795f16d,Namespace:kube-system,Attempt:0,}"
Jul 12 10:13:27.840962 containerd[1591]: time="2025-07-12T10:13:27.840938015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-rtzpw,Uid:726e2a2b-dba2-401e-ba58-be1ad9b6ceae,Namespace:calico-apiserver,Attempt:0,}"
Jul 12 10:13:27.841062 containerd[1591]: time="2025-07-12T10:13:27.841041730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-68bnm,Uid:a1f402c6-d478-442f-ad36-94ee1518dde7,Namespace:calico-apiserver,Attempt:0,}"
Jul 12 10:13:27.932853 containerd[1591]: time="2025-07-12T10:13:27.930505059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:27.932853 containerd[1591]: time="2025-07-12T10:13:27.932063869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 12 10:13:27.938388 containerd[1591]: time="2025-07-12T10:13:27.937152464Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:27.941239 containerd[1591]: time="2025-07-12T10:13:27.940105845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:27.944412 containerd[1591]: time="2025-07-12T10:13:27.942982733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.33200824s"
Jul 12 10:13:27.944412 containerd[1591]: time="2025-07-12T10:13:27.943027066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 12 10:13:27.949264 containerd[1591]: time="2025-07-12T10:13:27.948326938Z" level=info msg="CreateContainer within sandbox \"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 12 10:13:27.966395 containerd[1591]: time="2025-07-12T10:13:27.966349934Z" level=info msg="Container a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:13:27.975040 containerd[1591]: time="2025-07-12T10:13:27.974976449Z" level=info msg="CreateContainer within sandbox \"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7\""
Jul 12 10:13:27.975845 containerd[1591]: time="2025-07-12T10:13:27.975820295Z" level=info msg="StartContainer for \"a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7\""
Jul 12 10:13:27.976894 containerd[1591]: time="2025-07-12T10:13:27.976866953Z" level=info msg="connecting to shim a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7" address="unix:///run/containerd/s/b86cae1d60aee4f11aae149e7612933de7acc4e93ea9a38627b6824f2f446b30" protocol=ttrpc version=3
Jul 12 10:13:27.997041 systemd-networkd[1498]: cali6b7cdece69e: Link UP
Jul 12 10:13:27.997815 systemd-networkd[1498]: cali6b7cdece69e: Gained carrier
Jul 12 10:13:28.013528 systemd[1]: Started cri-containerd-a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7.scope - libcontainer container a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7.
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.896 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0 calico-apiserver-64dc94bff6- calico-apiserver a1f402c6-d478-442f-ad36-94ee1518dde7 805 0 2025-07-12 10:13:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64dc94bff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64dc94bff6-68bnm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b7cdece69e [] [] }} ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.898 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.942 [INFO][4412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" HandleID="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Workload="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.942 [INFO][4412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" HandleID="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Workload="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019fae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64dc94bff6-68bnm", "timestamp":"2025-07-12 10:13:27.942337831 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.942 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.942 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.942 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.953 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.960 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.968 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.971 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.973 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.973 [INFO][4412] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.975 [INFO][4412] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.980 [INFO][4412] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.986 [INFO][4412] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.986 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" host="localhost"
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.987 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 10:13:28.015611 containerd[1591]: 2025-07-12 10:13:27.987 [INFO][4412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" HandleID="k8s-pod-network.7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Workload="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0"
Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:27.993 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0", GenerateName:"calico-apiserver-64dc94bff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1f402c6-d478-442f-ad36-94ee1518dde7", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64dc94bff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s",
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64dc94bff6-68bnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b7cdece69e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:27.994 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:27.994 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b7cdece69e ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:27.998 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:27.999 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0", GenerateName:"calico-apiserver-64dc94bff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1f402c6-d478-442f-ad36-94ee1518dde7", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64dc94bff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7", Pod:"calico-apiserver-64dc94bff6-68bnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b7cdece69e", MAC:"22:39:49:a1:43:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.016517 containerd[1591]: 2025-07-12 10:13:28.011 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-68bnm" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--68bnm-eth0" Jul 12 10:13:28.042776 containerd[1591]: time="2025-07-12T10:13:28.042699455Z" level=info msg="connecting to shim 7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7" address="unix:///run/containerd/s/7d1108af8348c83075e2d0202e405a874c8684faacf036521cbf88044a0e7629" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:28.075722 systemd[1]: Started cri-containerd-7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7.scope - libcontainer container 7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7. Jul 12 10:13:28.082834 containerd[1591]: time="2025-07-12T10:13:28.082787367Z" level=info msg="StartContainer for \"a4e5819cc850b92211f0e3a707b0a769458a3e27a80a47f35220eb0d213a31e7\" returns successfully" Jul 12 10:13:28.093564 containerd[1591]: time="2025-07-12T10:13:28.093514448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 10:13:28.097485 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:28.113576 systemd-networkd[1498]: cali4e2af64392e: Link UP Jul 12 10:13:28.115213 systemd-networkd[1498]: cali4e2af64392e: Gained carrier Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.909 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0 coredns-7c65d6cfc9- kube-system 1af8e1d1-d77b-4b95-8661-839de795f16d 801 0 2025-07-12 10:12:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mksmw eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali4e2af64392e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.910 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.956 [INFO][4420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" HandleID="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Workload="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.956 [INFO][4420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" HandleID="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Workload="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mksmw", "timestamp":"2025-07-12 10:13:27.956675801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.957 [INFO][4420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.987 [INFO][4420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:27.987 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.051 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.062 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.069 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.071 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.074 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.074 [INFO][4420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.076 [INFO][4420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.083 [INFO][4420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.095 [INFO][4420] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.096 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" host="localhost" Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.099 [INFO][4420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:28.125934 containerd[1591]: 2025-07-12 10:13:28.099 [INFO][4420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" HandleID="k8s-pod-network.fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Workload="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.110 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1af8e1d1-d77b-4b95-8661-839de795f16d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mksmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e2af64392e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.110 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.110 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e2af64392e ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.113 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.113 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1af8e1d1-d77b-4b95-8661-839de795f16d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d", Pod:"coredns-7c65d6cfc9-mksmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e2af64392e", MAC:"56:c9:e8:d2:44:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.126477 containerd[1591]: 2025-07-12 10:13:28.121 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mksmw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mksmw-eth0" Jul 12 10:13:28.136566 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Jul 12 10:13:28.148400 containerd[1591]: time="2025-07-12T10:13:28.148326442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-68bnm,Uid:a1f402c6-d478-442f-ad36-94ee1518dde7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7\"" Jul 12 10:13:28.167620 containerd[1591]: time="2025-07-12T10:13:28.167290223Z" level=info msg="connecting to shim fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d" address="unix:///run/containerd/s/d8844883de26cd1eab7cc572b12ab306c62dddc8cfd13d229c894e591483e67a" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:28.198712 systemd-networkd[1498]: cali9fa3a69a153: Link UP Jul 12 10:13:28.199823 systemd-networkd[1498]: cali9fa3a69a153: Gained carrier Jul 12 10:13:28.202461 systemd[1]: Started cri-containerd-fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d.scope - libcontainer container fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d. 
Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:27.907 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0 calico-apiserver-64dc94bff6- calico-apiserver 726e2a2b-dba2-401e-ba58-be1ad9b6ceae 804 0 2025-07-12 10:13:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64dc94bff6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64dc94bff6-rtzpw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9fa3a69a153 [] [] }} ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:27.907 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:27.958 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" HandleID="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Workload="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:27.958 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" 
HandleID="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Workload="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005930d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64dc94bff6-rtzpw", "timestamp":"2025-07-12 10:13:27.95839414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:27.958 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.100 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.100 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.152 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.161 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.174 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.176 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.179 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 
2025-07-12 10:13:28.179 [INFO][4418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.180 [INFO][4418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.185 [INFO][4418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.190 [INFO][4418] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.190 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" host="localhost" Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.190 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 10:13:28.220397 containerd[1591]: 2025-07-12 10:13:28.190 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" HandleID="k8s-pod-network.a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Workload="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.194 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0", GenerateName:"calico-apiserver-64dc94bff6-", Namespace:"calico-apiserver", SelfLink:"", UID:"726e2a2b-dba2-401e-ba58-be1ad9b6ceae", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64dc94bff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64dc94bff6-rtzpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fa3a69a153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.194 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.194 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fa3a69a153 ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.200 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.201 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0", GenerateName:"calico-apiserver-64dc94bff6-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"726e2a2b-dba2-401e-ba58-be1ad9b6ceae", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64dc94bff6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce", Pod:"calico-apiserver-64dc94bff6-rtzpw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fa3a69a153", MAC:"be:09:12:58:77:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.220948 containerd[1591]: 2025-07-12 10:13:28.216 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" Namespace="calico-apiserver" Pod="calico-apiserver-64dc94bff6-rtzpw" WorkloadEndpoint="localhost-k8s-calico--apiserver--64dc94bff6--rtzpw-eth0" Jul 12 10:13:28.222122 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:28.241482 containerd[1591]: time="2025-07-12T10:13:28.241303785Z" level=info msg="connecting to shim 
a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce" address="unix:///run/containerd/s/8e666cbf77bf18d1409d4e2a087e069fa6ef2c36d088ea4438d48fba1c69c637" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:28.260638 containerd[1591]: time="2025-07-12T10:13:28.260487189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mksmw,Uid:1af8e1d1-d77b-4b95-8661-839de795f16d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d\"" Jul 12 10:13:28.265505 containerd[1591]: time="2025-07-12T10:13:28.265472889Z" level=info msg="CreateContainer within sandbox \"fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 10:13:28.266316 systemd[1]: Started cri-containerd-a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce.scope - libcontainer container a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce. 
Jul 12 10:13:28.279897 containerd[1591]: time="2025-07-12T10:13:28.279810065Z" level=info msg="Container 9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:28.286408 kubelet[2749]: I0712 10:13:28.285953 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-79dhp" podStartSLOduration=37.285929736 podStartE2EDuration="37.285929736s" podCreationTimestamp="2025-07-12 10:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:13:28.273493894 +0000 UTC m=+42.520867023" watchObservedRunningTime="2025-07-12 10:13:28.285929736 +0000 UTC m=+42.533302865" Jul 12 10:13:28.287678 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:28.303295 containerd[1591]: time="2025-07-12T10:13:28.303247143Z" level=info msg="CreateContainer within sandbox \"fb96c62986f3fb9a2e2d2c5704a2c3ac3211a543a1193457226911532eedc71d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680\"" Jul 12 10:13:28.304153 containerd[1591]: time="2025-07-12T10:13:28.304125143Z" level=info msg="StartContainer for \"9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680\"" Jul 12 10:13:28.307236 containerd[1591]: time="2025-07-12T10:13:28.307207726Z" level=info msg="connecting to shim 9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680" address="unix:///run/containerd/s/d8844883de26cd1eab7cc572b12ab306c62dddc8cfd13d229c894e591483e67a" protocol=ttrpc version=3 Jul 12 10:13:28.319540 containerd[1591]: time="2025-07-12T10:13:28.319486804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64dc94bff6-rtzpw,Uid:726e2a2b-dba2-401e-ba58-be1ad9b6ceae,Namespace:calico-apiserver,Attempt:0,} 
returns sandbox id \"a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce\"" Jul 12 10:13:28.332389 systemd[1]: Started cri-containerd-9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680.scope - libcontainer container 9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680. Jul 12 10:13:28.367393 containerd[1591]: time="2025-07-12T10:13:28.367320053Z" level=info msg="StartContainer for \"9bd59befed4343cdf749654c37de58bf51508846bb3a692271a659e2c3cb3680\" returns successfully" Jul 12 10:13:28.392401 systemd-networkd[1498]: calif8e02e6e609: Gained IPv6LL Jul 12 10:13:28.839340 containerd[1591]: time="2025-07-12T10:13:28.839281677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6645b4c756-pfljt,Uid:1cfe8fc0-9493-425f-a1f9-214bd32beb86,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:28.932819 systemd-networkd[1498]: calieb159f1b478: Link UP Jul 12 10:13:28.933089 systemd-networkd[1498]: calieb159f1b478: Gained carrier Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.873 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0 calico-kube-controllers-6645b4c756- calico-system 1cfe8fc0-9493-425f-a1f9-214bd32beb86 802 0 2025-07-12 10:13:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6645b4c756 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6645b4c756-pfljt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calieb159f1b478 [] [] }} ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.874 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.899 [INFO][4693] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" HandleID="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Workload="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.899 [INFO][4693] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" HandleID="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Workload="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6645b4c756-pfljt", "timestamp":"2025-07-12 10:13:28.899006695 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.899 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.899 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.899 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.906 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.909 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.913 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.915 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.917 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.917 [INFO][4693] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.918 [INFO][4693] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9 Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.922 [INFO][4693] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.927 [INFO][4693] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.927 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" host="localhost" Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.927 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:28.946673 containerd[1591]: 2025-07-12 10:13:28.927 [INFO][4693] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" HandleID="k8s-pod-network.d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Workload="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.947592 containerd[1591]: 2025-07-12 10:13:28.930 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0", GenerateName:"calico-kube-controllers-6645b4c756-", Namespace:"calico-system", SelfLink:"", UID:"1cfe8fc0-9493-425f-a1f9-214bd32beb86", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6645b4c756", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6645b4c756-pfljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb159f1b478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.947592 containerd[1591]: 2025-07-12 10:13:28.930 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.947592 containerd[1591]: 2025-07-12 10:13:28.930 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb159f1b478 ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.947592 containerd[1591]: 2025-07-12 10:13:28.933 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.947592 containerd[1591]: 
2025-07-12 10:13:28.934 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0", GenerateName:"calico-kube-controllers-6645b4c756-", Namespace:"calico-system", SelfLink:"", UID:"1cfe8fc0-9493-425f-a1f9-214bd32beb86", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6645b4c756", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9", Pod:"calico-kube-controllers-6645b4c756-pfljt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calieb159f1b478", MAC:"22:7a:22:bb:14:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:28.947592 containerd[1591]: 
2025-07-12 10:13:28.943 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" Namespace="calico-system" Pod="calico-kube-controllers-6645b4c756-pfljt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6645b4c756--pfljt-eth0" Jul 12 10:13:28.970917 containerd[1591]: time="2025-07-12T10:13:28.970872561Z" level=info msg="connecting to shim d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9" address="unix:///run/containerd/s/d892dc1e149ab43c8129c0596e3714b948f0282497de326f23fe4b15fcb51dd2" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:28.998339 systemd[1]: Started cri-containerd-d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9.scope - libcontainer container d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9. Jul 12 10:13:29.012076 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:29.043003 containerd[1591]: time="2025-07-12T10:13:29.042967008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6645b4c756-pfljt,Uid:1cfe8fc0-9493-425f-a1f9-214bd32beb86,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9\"" Jul 12 10:13:29.223399 systemd-networkd[1498]: cali6b7cdece69e: Gained IPv6LL Jul 12 10:13:29.839342 containerd[1591]: time="2025-07-12T10:13:29.839285396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-mcxx7,Uid:5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:29.927354 systemd-networkd[1498]: cali9fa3a69a153: Gained IPv6LL Jul 12 10:13:29.951578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686804845.mount: Deactivated successfully. 
Jul 12 10:13:30.003888 containerd[1591]: time="2025-07-12T10:13:30.003448682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:30.004728 containerd[1591]: time="2025-07-12T10:13:30.004689294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 12 10:13:30.006155 containerd[1591]: time="2025-07-12T10:13:30.006101577Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:30.008958 containerd[1591]: time="2025-07-12T10:13:30.008913612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:30.010031 containerd[1591]: time="2025-07-12T10:13:30.009808002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.916244722s" Jul 12 10:13:30.010031 containerd[1591]: time="2025-07-12T10:13:30.009833560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 12 10:13:30.012652 containerd[1591]: time="2025-07-12T10:13:30.012193053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 10:13:30.013698 containerd[1591]: time="2025-07-12T10:13:30.013666002Z" level=info msg="CreateContainer within sandbox 
\"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 10:13:30.025198 containerd[1591]: time="2025-07-12T10:13:30.024514064Z" level=info msg="Container 9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:30.035930 containerd[1591]: time="2025-07-12T10:13:30.035875052Z" level=info msg="CreateContainer within sandbox \"b6d1cdc398a4edf2cabdec02cd062d1b96699a039aee23d8028e13026b9a71be\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3\"" Jul 12 10:13:30.037653 containerd[1591]: time="2025-07-12T10:13:30.036668773Z" level=info msg="StartContainer for \"9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3\"" Jul 12 10:13:30.037792 containerd[1591]: time="2025-07-12T10:13:30.037770184Z" level=info msg="connecting to shim 9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3" address="unix:///run/containerd/s/b86cae1d60aee4f11aae149e7612933de7acc4e93ea9a38627b6824f2f446b30" protocol=ttrpc version=3 Jul 12 10:13:30.057258 systemd-networkd[1498]: cali4e2af64392e: Gained IPv6LL Jul 12 10:13:30.070418 systemd[1]: Started cri-containerd-9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3.scope - libcontainer container 9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3. 
Jul 12 10:13:30.097555 systemd-networkd[1498]: cali2f78cc2ce62: Link UP Jul 12 10:13:30.099423 systemd-networkd[1498]: cali2f78cc2ce62: Gained carrier Jul 12 10:13:30.109267 kubelet[2749]: I0712 10:13:30.109220 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mksmw" podStartSLOduration=39.10914625 podStartE2EDuration="39.10914625s" podCreationTimestamp="2025-07-12 10:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 10:13:29.27455509 +0000 UTC m=+43.521928219" watchObservedRunningTime="2025-07-12 10:13:30.10914625 +0000 UTC m=+44.356519379" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.017 [INFO][4763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0 goldmane-58fd7646b9- calico-system 5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc 803 0 2025-07-12 10:13:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-mcxx7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2f78cc2ce62 [] [] }} ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.018 [INFO][4763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 
10:13:30.045 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" HandleID="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Workload="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.045 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" HandleID="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Workload="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-mcxx7", "timestamp":"2025-07-12 10:13:30.045558548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.045 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.045 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.045 [INFO][4782] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.060 [INFO][4782] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.070 [INFO][4782] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.074 [INFO][4782] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.076 [INFO][4782] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.079 [INFO][4782] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.079 [INFO][4782] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.080 [INFO][4782] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.085 [INFO][4782] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.091 [INFO][4782] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.091 [INFO][4782] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" host="localhost" Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.091 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:30.114609 containerd[1591]: 2025-07-12 10:13:30.091 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" HandleID="k8s-pod-network.da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Workload="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.094 [INFO][4763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-mcxx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f78cc2ce62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.094 [INFO][4763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.095 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f78cc2ce62 ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.100 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.100 [INFO][4763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b", Pod:"goldmane-58fd7646b9-mcxx7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2f78cc2ce62", MAC:"96:fe:bb:8f:c7:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:30.115138 containerd[1591]: 2025-07-12 10:13:30.111 [INFO][4763] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" Namespace="calico-system" Pod="goldmane-58fd7646b9-mcxx7" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--mcxx7-eth0" Jul 12 10:13:30.270243 containerd[1591]: time="2025-07-12T10:13:30.270197948Z" level=info msg="StartContainer for 
\"9569d14de0d0e47d6bc2a085e59396759213e94429f79f5aa3bf37aab33228e3\" returns successfully" Jul 12 10:13:30.292310 containerd[1591]: time="2025-07-12T10:13:30.292243372Z" level=info msg="connecting to shim da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b" address="unix:///run/containerd/s/039c111e5587fb2c69d330b5adf52dfcf344ae608abb259b27eea74958ddcae0" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:30.323314 systemd[1]: Started cri-containerd-da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b.scope - libcontainer container da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b. Jul 12 10:13:30.338424 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:30.372900 containerd[1591]: time="2025-07-12T10:13:30.372752650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-mcxx7,Uid:5bc98679-4b95-44d6-bc8a-3bb14fbcd2dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b\"" Jul 12 10:13:30.823467 systemd-networkd[1498]: calieb159f1b478: Gained IPv6LL Jul 12 10:13:30.838592 containerd[1591]: time="2025-07-12T10:13:30.838535641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqlk5,Uid:536bd569-4556-43f6-b1a4-efffb6380322,Namespace:calico-system,Attempt:0,}" Jul 12 10:13:30.941252 systemd-networkd[1498]: cali0f1add7a9cc: Link UP Jul 12 10:13:30.941668 systemd-networkd[1498]: cali0f1add7a9cc: Gained carrier Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.872 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qqlk5-eth0 csi-node-driver- calico-system 536bd569-4556-43f6-b1a4-efffb6380322 684 0 2025-07-12 10:13:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qqlk5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f1add7a9cc [] [] }} ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.872 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.900 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" HandleID="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Workload="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.901 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" HandleID="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Workload="localhost-k8s-csi--node--driver--qqlk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qqlk5", "timestamp":"2025-07-12 10:13:30.900814084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.901 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.901 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.901 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.907 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.914 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.918 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.920 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.923 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.923 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.924 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.928 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.935 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.935 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" host="localhost" Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.935 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 10:13:30.961351 containerd[1591]: 2025-07-12 10:13:30.935 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" HandleID="k8s-pod-network.1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Workload="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.938 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqlk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"536bd569-4556-43f6-b1a4-efffb6380322", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qqlk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f1add7a9cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.938 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.939 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f1add7a9cc ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.941 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.945 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqlk5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"536bd569-4556-43f6-b1a4-efffb6380322", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 10, 13, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea", Pod:"csi-node-driver-qqlk5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f1add7a9cc", MAC:"ba:87:bd:3e:1b:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 
10:13:30.962600 containerd[1591]: 2025-07-12 10:13:30.956 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" Namespace="calico-system" Pod="csi-node-driver-qqlk5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqlk5-eth0" Jul 12 10:13:30.991937 containerd[1591]: time="2025-07-12T10:13:30.991871331Z" level=info msg="connecting to shim 1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea" address="unix:///run/containerd/s/3c5d67d046340049024569d8b59e473376bc2faa99b9fb8c6cd8a4cb43638ca3" namespace=k8s.io protocol=ttrpc version=3 Jul 12 10:13:31.026470 systemd[1]: Started cri-containerd-1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea.scope - libcontainer container 1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea. Jul 12 10:13:31.040183 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 10:13:31.055489 containerd[1591]: time="2025-07-12T10:13:31.055437884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqlk5,Uid:536bd569-4556-43f6-b1a4-efffb6380322,Namespace:calico-system,Attempt:0,} returns sandbox id \"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea\"" Jul 12 10:13:31.656462 systemd-networkd[1498]: cali2f78cc2ce62: Gained IPv6LL Jul 12 10:13:31.954276 containerd[1591]: time="2025-07-12T10:13:31.954106739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.954986 containerd[1591]: time="2025-07-12T10:13:31.954949653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 12 10:13:31.956103 containerd[1591]: time="2025-07-12T10:13:31.956071851Z" level=info msg="ImageCreate event 
name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.958256 containerd[1591]: time="2025-07-12T10:13:31.958214958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:31.958982 containerd[1591]: time="2025-07-12T10:13:31.958927787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.946687976s" Jul 12 10:13:31.959022 containerd[1591]: time="2025-07-12T10:13:31.958982069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 10:13:31.960365 containerd[1591]: time="2025-07-12T10:13:31.960017274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 10:13:31.961533 containerd[1591]: time="2025-07-12T10:13:31.961459654Z" level=info msg="CreateContainer within sandbox \"7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:13:31.970252 containerd[1591]: time="2025-07-12T10:13:31.970206067Z" level=info msg="Container 18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:31.979876 containerd[1591]: time="2025-07-12T10:13:31.979830901Z" level=info msg="CreateContainer within sandbox \"7d7be6f7914149709a240a1f1dace54935b4ed4b3b803e01915e14a9dce042b7\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb\"" Jul 12 10:13:31.980548 containerd[1591]: time="2025-07-12T10:13:31.980511750Z" level=info msg="StartContainer for \"18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb\"" Jul 12 10:13:31.981705 containerd[1591]: time="2025-07-12T10:13:31.981649077Z" level=info msg="connecting to shim 18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb" address="unix:///run/containerd/s/7d1108af8348c83075e2d0202e405a874c8684faacf036521cbf88044a0e7629" protocol=ttrpc version=3 Jul 12 10:13:32.010414 systemd[1]: Started cri-containerd-18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb.scope - libcontainer container 18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb. Jul 12 10:13:32.070872 containerd[1591]: time="2025-07-12T10:13:32.070824951Z" level=info msg="StartContainer for \"18ee100bcde85a87d5f69ce6c4c494cb552a927b722a96610af05307a8000fbb\" returns successfully" Jul 12 10:13:32.231433 systemd-networkd[1498]: cali0f1add7a9cc: Gained IPv6LL Jul 12 10:13:32.240222 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). 
Jul 12 10:13:32.412954 kubelet[2749]: I0712 10:13:32.412824 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64dc94bff6-68bnm" podStartSLOduration=28.602862778 podStartE2EDuration="32.412795045s" podCreationTimestamp="2025-07-12 10:13:00 +0000 UTC" firstStartedPulling="2025-07-12 10:13:28.149941137 +0000 UTC m=+42.397314266" lastFinishedPulling="2025-07-12 10:13:31.959873394 +0000 UTC m=+46.207246533" observedRunningTime="2025-07-12 10:13:32.410748169 +0000 UTC m=+46.658121298" watchObservedRunningTime="2025-07-12 10:13:32.412795045 +0000 UTC m=+46.660168174" Jul 12 10:13:32.415390 kubelet[2749]: I0712 10:13:32.413209 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-774d884cc6-zsqw5" podStartSLOduration=4.011788622 podStartE2EDuration="7.413198734s" podCreationTimestamp="2025-07-12 10:13:25 +0000 UTC" firstStartedPulling="2025-07-12 10:13:26.61056386 +0000 UTC m=+40.857936979" lastFinishedPulling="2025-07-12 10:13:30.011973962 +0000 UTC m=+44.259347091" observedRunningTime="2025-07-12 10:13:31.461799788 +0000 UTC m=+45.709172917" watchObservedRunningTime="2025-07-12 10:13:32.413198734 +0000 UTC m=+46.660571873" Jul 12 10:13:32.441519 containerd[1591]: time="2025-07-12T10:13:32.441112353Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:32.442187 containerd[1591]: time="2025-07-12T10:13:32.442130587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 10:13:32.445752 containerd[1591]: time="2025-07-12T10:13:32.445609883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 485.563614ms" Jul 12 10:13:32.445752 containerd[1591]: time="2025-07-12T10:13:32.445639929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 12 10:13:32.450443 containerd[1591]: time="2025-07-12T10:13:32.450393940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 10:13:32.453107 containerd[1591]: time="2025-07-12T10:13:32.453015295Z" level=info msg="CreateContainer within sandbox \"a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 10:13:32.468065 containerd[1591]: time="2025-07-12T10:13:32.467434393Z" level=info msg="Container ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:32.479596 containerd[1591]: time="2025-07-12T10:13:32.479547120Z" level=info msg="CreateContainer within sandbox \"a80b5ff988fa1f13d4e7aef438de72e47f67b9bc433e64dd6782996223edc8ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749\"" Jul 12 10:13:32.480442 containerd[1591]: time="2025-07-12T10:13:32.480414519Z" level=info msg="StartContainer for \"ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749\"" Jul 12 10:13:32.482090 containerd[1591]: time="2025-07-12T10:13:32.481981533Z" level=info msg="connecting to shim ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749" address="unix:///run/containerd/s/8e666cbf77bf18d1409d4e2a087e069fa6ef2c36d088ea4438d48fba1c69c637" protocol=ttrpc version=3 Jul 12 10:13:32.484651 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA 
SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:13:32.486425 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:13:32.491729 systemd-logind[1577]: New session 9 of user core. Jul 12 10:13:32.501649 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 10:13:32.511377 systemd[1]: Started cri-containerd-ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749.scope - libcontainer container ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749. Jul 12 10:13:32.592831 containerd[1591]: time="2025-07-12T10:13:32.592749738Z" level=info msg="StartContainer for \"ab708b98dbcd398b414b11a2ac7e1dedbdc148cbf06f490436877b5554135749\" returns successfully" Jul 12 10:13:32.662852 sshd[5028]: Connection closed by 10.0.0.1 port 47730 Jul 12 10:13:32.663255 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jul 12 10:13:32.668458 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:47730.service: Deactivated successfully. Jul 12 10:13:32.671688 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 10:13:32.673098 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jul 12 10:13:32.675776 systemd-logind[1577]: Removed session 9. 
Jul 12 10:13:33.284660 kubelet[2749]: I0712 10:13:33.284619 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:33.294646 kubelet[2749]: I0712 10:13:33.294392 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64dc94bff6-rtzpw" podStartSLOduration=29.168954337 podStartE2EDuration="33.294372098s" podCreationTimestamp="2025-07-12 10:13:00 +0000 UTC" firstStartedPulling="2025-07-12 10:13:28.322701956 +0000 UTC m=+42.570075076" lastFinishedPulling="2025-07-12 10:13:32.448119688 +0000 UTC m=+46.695492837" observedRunningTime="2025-07-12 10:13:33.29384576 +0000 UTC m=+47.541218899" watchObservedRunningTime="2025-07-12 10:13:33.294372098 +0000 UTC m=+47.541745228" Jul 12 10:13:35.038315 containerd[1591]: time="2025-07-12T10:13:35.037924482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:35.039726 containerd[1591]: time="2025-07-12T10:13:35.039683666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 12 10:13:35.041817 containerd[1591]: time="2025-07-12T10:13:35.041565190Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:35.043870 containerd[1591]: time="2025-07-12T10:13:35.043809013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:35.044327 containerd[1591]: time="2025-07-12T10:13:35.044290177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.593857424s" Jul 12 10:13:35.044327 containerd[1591]: time="2025-07-12T10:13:35.044322578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 12 10:13:35.046313 containerd[1591]: time="2025-07-12T10:13:35.046271820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 10:13:35.064605 containerd[1591]: time="2025-07-12T10:13:35.064487983Z" level=info msg="CreateContainer within sandbox \"d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 10:13:35.077933 containerd[1591]: time="2025-07-12T10:13:35.077833599Z" level=info msg="Container a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:35.287911 containerd[1591]: time="2025-07-12T10:13:35.287805959Z" level=info msg="CreateContainer within sandbox \"d1a681c201e7d9b6dfc029de2086b1441b2ab33cffa975db28ff104ecfc982c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\"" Jul 12 10:13:35.289203 containerd[1591]: time="2025-07-12T10:13:35.288440811Z" level=info msg="StartContainer for \"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\"" Jul 12 10:13:35.290207 containerd[1591]: time="2025-07-12T10:13:35.290147086Z" level=info msg="connecting to shim a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731" address="unix:///run/containerd/s/d892dc1e149ab43c8129c0596e3714b948f0282497de326f23fe4b15fcb51dd2" protocol=ttrpc version=3 Jul 12 10:13:35.344347 systemd[1]: 
Started cri-containerd-a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731.scope - libcontainer container a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731. Jul 12 10:13:35.885762 containerd[1591]: time="2025-07-12T10:13:35.885696748Z" level=info msg="StartContainer for \"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\" returns successfully" Jul 12 10:13:37.334075 kubelet[2749]: I0712 10:13:37.333979 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:37.683545 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). Jul 12 10:13:37.764263 sshd[5131]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:13:37.766404 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:13:37.771404 systemd-logind[1577]: New session 10 of user core. Jul 12 10:13:37.782333 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 10:13:37.915061 sshd[5135]: Connection closed by 10.0.0.1 port 33024 Jul 12 10:13:37.915483 sshd-session[5131]: pam_unix(sshd:session): session closed for user core Jul 12 10:13:37.926837 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:33024.service: Deactivated successfully. Jul 12 10:13:37.929853 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 10:13:37.931782 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jul 12 10:13:37.935691 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:33038.service - OpenSSH per-connection server daemon (10.0.0.1:33038). Jul 12 10:13:37.936970 systemd-logind[1577]: Removed session 10. 
Jul 12 10:13:37.992107 sshd[5154]: Accepted publickey for core from 10.0.0.1 port 33038 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:13:37.994442 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:13:38.000560 systemd-logind[1577]: New session 11 of user core. Jul 12 10:13:38.006371 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 10:13:38.165576 sshd[5157]: Connection closed by 10.0.0.1 port 33038 Jul 12 10:13:38.166432 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Jul 12 10:13:38.179235 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:33038.service: Deactivated successfully. Jul 12 10:13:38.183998 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 10:13:38.187417 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jul 12 10:13:38.190788 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:33042.service - OpenSSH per-connection server daemon (10.0.0.1:33042). Jul 12 10:13:38.192539 systemd-logind[1577]: Removed session 11. Jul 12 10:13:38.256937 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 33042 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw Jul 12 10:13:38.259393 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 10:13:38.266907 systemd-logind[1577]: New session 12 of user core. Jul 12 10:13:38.269426 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 10:13:38.310938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920894539.mount: Deactivated successfully. Jul 12 10:13:38.419985 sshd[5172]: Connection closed by 10.0.0.1 port 33042 Jul 12 10:13:38.420736 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Jul 12 10:13:38.426191 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:33042.service: Deactivated successfully. 
Jul 12 10:13:38.428891 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 10:13:38.430006 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jul 12 10:13:38.432726 systemd-logind[1577]: Removed session 12. Jul 12 10:13:39.046965 containerd[1591]: time="2025-07-12T10:13:39.046910949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:39.047971 containerd[1591]: time="2025-07-12T10:13:39.047732782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 12 10:13:39.048999 containerd[1591]: time="2025-07-12T10:13:39.048936402Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:39.051546 containerd[1591]: time="2025-07-12T10:13:39.051519201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:39.052144 containerd[1591]: time="2025-07-12T10:13:39.052118036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.005802043s" Jul 12 10:13:39.052215 containerd[1591]: time="2025-07-12T10:13:39.052149394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 12 10:13:39.053230 containerd[1591]: time="2025-07-12T10:13:39.053018417Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 10:13:39.054012 containerd[1591]: time="2025-07-12T10:13:39.053989009Z" level=info msg="CreateContainer within sandbox \"da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 10:13:39.062526 containerd[1591]: time="2025-07-12T10:13:39.062498337Z" level=info msg="Container 7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2: CDI devices from CRI Config.CDIDevices: []" Jul 12 10:13:39.069856 containerd[1591]: time="2025-07-12T10:13:39.069829753Z" level=info msg="CreateContainer within sandbox \"da2e7f175c5d151d51d6147dee4eaf36e9ca3d60a29073610b2cef6ad2b2436b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\"" Jul 12 10:13:39.070316 containerd[1591]: time="2025-07-12T10:13:39.070294506Z" level=info msg="StartContainer for \"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\"" Jul 12 10:13:39.071260 containerd[1591]: time="2025-07-12T10:13:39.071235133Z" level=info msg="connecting to shim 7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2" address="unix:///run/containerd/s/039c111e5587fb2c69d330b5adf52dfcf344ae608abb259b27eea74958ddcae0" protocol=ttrpc version=3 Jul 12 10:13:39.093480 systemd[1]: Started cri-containerd-7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2.scope - libcontainer container 7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2. 
Jul 12 10:13:39.141306 containerd[1591]: time="2025-07-12T10:13:39.141229512Z" level=info msg="StartContainer for \"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\" returns successfully" Jul 12 10:13:39.351866 kubelet[2749]: I0712 10:13:39.351557 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-mcxx7" podStartSLOduration=28.673330123 podStartE2EDuration="37.351533463s" podCreationTimestamp="2025-07-12 10:13:02 +0000 UTC" firstStartedPulling="2025-07-12 10:13:30.374714075 +0000 UTC m=+44.622087205" lastFinishedPulling="2025-07-12 10:13:39.052917416 +0000 UTC m=+53.300290545" observedRunningTime="2025-07-12 10:13:39.350890316 +0000 UTC m=+53.598263445" watchObservedRunningTime="2025-07-12 10:13:39.351533463 +0000 UTC m=+53.598906592" Jul 12 10:13:39.353373 kubelet[2749]: I0712 10:13:39.353308 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6645b4c756-pfljt" podStartSLOduration=30.351915586 podStartE2EDuration="36.353297075s" podCreationTimestamp="2025-07-12 10:13:03 +0000 UTC" firstStartedPulling="2025-07-12 10:13:29.044457738 +0000 UTC m=+43.291830867" lastFinishedPulling="2025-07-12 10:13:35.045839227 +0000 UTC m=+49.293212356" observedRunningTime="2025-07-12 10:13:36.396156993 +0000 UTC m=+50.643530122" watchObservedRunningTime="2025-07-12 10:13:39.353297075 +0000 UTC m=+53.600670204" Jul 12 10:13:40.325526 kubelet[2749]: I0712 10:13:40.325462 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 10:13:40.432911 containerd[1591]: time="2025-07-12T10:13:40.432841690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 10:13:40.434267 containerd[1591]: time="2025-07-12T10:13:40.434217683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes 
read=8759190"
Jul 12 10:13:40.435423 containerd[1591]: time="2025-07-12T10:13:40.435377311Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:40.438097 containerd[1591]: time="2025-07-12T10:13:40.437420978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:40.438097 containerd[1591]: time="2025-07-12T10:13:40.437968336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.384919832s"
Jul 12 10:13:40.438097 containerd[1591]: time="2025-07-12T10:13:40.437998041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 12 10:13:40.441470 containerd[1591]: time="2025-07-12T10:13:40.441414736Z" level=info msg="CreateContainer within sandbox \"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 12 10:13:40.451649 containerd[1591]: time="2025-07-12T10:13:40.450626152Z" level=info msg="Container 7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:13:40.457375 containerd[1591]: time="2025-07-12T10:13:40.457346260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\" id:\"afe36272f48463ff04a980d4eb3371bcfca56a9f2841ee67c8f4d59be5d15ae1\" pid:5249 exit_status:1 exited_at:{seconds:1752315220 nanos:456789264}"
Jul 12 10:13:40.464404 containerd[1591]: time="2025-07-12T10:13:40.464365369Z" level=info msg="CreateContainer within sandbox \"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895\""
Jul 12 10:13:40.464869 containerd[1591]: time="2025-07-12T10:13:40.464835782Z" level=info msg="StartContainer for \"7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895\""
Jul 12 10:13:40.466573 containerd[1591]: time="2025-07-12T10:13:40.466541214Z" level=info msg="connecting to shim 7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895" address="unix:///run/containerd/s/3c5d67d046340049024569d8b59e473376bc2faa99b9fb8c6cd8a4cb43638ca3" protocol=ttrpc version=3
Jul 12 10:13:40.494302 systemd[1]: Started cri-containerd-7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895.scope - libcontainer container 7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895.
Jul 12 10:13:40.539144 containerd[1591]: time="2025-07-12T10:13:40.539095753Z" level=info msg="StartContainer for \"7e8430231b4fe0d5ec42fc9fce592222bf5c3fc4e49d2ef72ca5f0864e0fb895\" returns successfully"
Jul 12 10:13:40.540387 containerd[1591]: time="2025-07-12T10:13:40.540341463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 12 10:13:41.428111 containerd[1591]: time="2025-07-12T10:13:41.428050276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\" id:\"05001c1e93b732722f27ce7e303855b4a39c267ae84266359ca6f208f24365ce\" pid:5306 exit_status:1 exited_at:{seconds:1752315221 nanos:427711180}"
Jul 12 10:13:42.275260 containerd[1591]: time="2025-07-12T10:13:42.275128435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:42.275896 containerd[1591]: time="2025-07-12T10:13:42.275840472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 12 10:13:42.276965 containerd[1591]: time="2025-07-12T10:13:42.276916262Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:42.278875 containerd[1591]: time="2025-07-12T10:13:42.278830626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 10:13:42.279420 containerd[1591]: time="2025-07-12T10:13:42.279381801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.739002457s"
Jul 12 10:13:42.279420 containerd[1591]: time="2025-07-12T10:13:42.279415083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 12 10:13:42.281569 containerd[1591]: time="2025-07-12T10:13:42.281523201Z" level=info msg="CreateContainer within sandbox \"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 12 10:13:42.288772 containerd[1591]: time="2025-07-12T10:13:42.288733077Z" level=info msg="Container 97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091: CDI devices from CRI Config.CDIDevices: []"
Jul 12 10:13:42.298202 containerd[1591]: time="2025-07-12T10:13:42.298158293Z" level=info msg="CreateContainer within sandbox \"1edbbd77036bca3af039e12b8ceba2dba6505bf9ad4cb33f0d7fe287c9b8abea\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091\""
Jul 12 10:13:42.298658 containerd[1591]: time="2025-07-12T10:13:42.298620620Z" level=info msg="StartContainer for \"97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091\""
Jul 12 10:13:42.299978 containerd[1591]: time="2025-07-12T10:13:42.299943093Z" level=info msg="connecting to shim 97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091" address="unix:///run/containerd/s/3c5d67d046340049024569d8b59e473376bc2faa99b9fb8c6cd8a4cb43638ca3" protocol=ttrpc version=3
Jul 12 10:13:42.322315 systemd[1]: Started cri-containerd-97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091.scope - libcontainer container 97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091.
Jul 12 10:13:42.366000 containerd[1591]: time="2025-07-12T10:13:42.365947872Z" level=info msg="StartContainer for \"97423bd31dd34cd1d9617b040feca44a1a00e293f822dd8e7ea8f2b391fd6091\" returns successfully"
Jul 12 10:13:42.907652 kubelet[2749]: I0712 10:13:42.907598 2749 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 12 10:13:42.907652 kubelet[2749]: I0712 10:13:42.907637 2749 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 12 10:13:43.372964 kubelet[2749]: I0712 10:13:43.372854 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qqlk5" podStartSLOduration=29.149843984 podStartE2EDuration="40.372823136s" podCreationTimestamp="2025-07-12 10:13:03 +0000 UTC" firstStartedPulling="2025-07-12 10:13:31.057170629 +0000 UTC m=+45.304601157" lastFinishedPulling="2025-07-12 10:13:42.28020718 +0000 UTC m=+56.527580309" observedRunningTime="2025-07-12 10:13:43.37148342 +0000 UTC m=+57.618856550" watchObservedRunningTime="2025-07-12 10:13:43.372823136 +0000 UTC m=+57.620196265"
Jul 12 10:13:43.451636 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:33056.service - OpenSSH per-connection server daemon (10.0.0.1:33056).
Jul 12 10:13:43.517079 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 33056 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:13:43.519338 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:13:43.524354 systemd-logind[1577]: New session 13 of user core.
Jul 12 10:13:43.532345 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 10:13:43.673315 sshd[5358]: Connection closed by 10.0.0.1 port 33056
Jul 12 10:13:43.673927 sshd-session[5355]: pam_unix(sshd:session): session closed for user core
Jul 12 10:13:43.680079 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:33056.service: Deactivated successfully.
Jul 12 10:13:43.682648 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 10:13:43.684359 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit.
Jul 12 10:13:43.685716 systemd-logind[1577]: Removed session 13.
Jul 12 10:13:44.309335 kubelet[2749]: I0712 10:13:44.309137 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 10:13:44.358091 containerd[1591]: time="2025-07-12T10:13:44.358020109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\" id:\"307f896ae57aca1d3f0c913727a9be9b40d5385d87705bf5a42c854aa24bc6da\" pid:5384 exited_at:{seconds:1752315224 nanos:357558924}"
Jul 12 10:13:44.400883 containerd[1591]: time="2025-07-12T10:13:44.400796832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\" id:\"44ce6990895efb31cdcb5def5456bab131ac35af100df2fb1e2a1c9c3eade154\" pid:5405 exited_at:{seconds:1752315224 nanos:400478505}"
Jul 12 10:13:47.775696 containerd[1591]: time="2025-07-12T10:13:47.775640300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\" id:\"1187d137b047272b5d50075ffd7dedf50b734eaa2a710ebf24d9c59a556b2117\" pid:5440 exit_status:1 exited_at:{seconds:1752315227 nanos:775300823}"
Jul 12 10:13:48.687873 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:57638.service - OpenSSH per-connection server daemon (10.0.0.1:57638).
Jul 12 10:13:48.763624 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 57638 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:13:48.765564 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:13:48.771681 systemd-logind[1577]: New session 14 of user core.
Jul 12 10:13:48.787338 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 10:13:48.919129 sshd[5458]: Connection closed by 10.0.0.1 port 57638
Jul 12 10:13:48.919506 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Jul 12 10:13:48.924305 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:57638.service: Deactivated successfully.
Jul 12 10:13:48.926425 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 10:13:48.927168 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit.
Jul 12 10:13:48.928776 systemd-logind[1577]: Removed session 14.
Jul 12 10:13:53.937920 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:57644.service - OpenSSH per-connection server daemon (10.0.0.1:57644).
Jul 12 10:13:54.001318 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 57644 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:13:54.003454 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:13:54.008074 systemd-logind[1577]: New session 15 of user core.
Jul 12 10:13:54.019338 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 10:13:54.148314 sshd[5479]: Connection closed by 10.0.0.1 port 57644
Jul 12 10:13:54.148712 sshd-session[5476]: pam_unix(sshd:session): session closed for user core
Jul 12 10:13:54.154336 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:57644.service: Deactivated successfully.
Jul 12 10:13:54.156808 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 10:13:54.157821 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit.
Jul 12 10:13:54.159373 systemd-logind[1577]: Removed session 15.
Jul 12 10:13:59.164302 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:58726.service - OpenSSH per-connection server daemon (10.0.0.1:58726).
Jul 12 10:13:59.217769 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 58726 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:13:59.219110 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:13:59.223251 systemd-logind[1577]: New session 16 of user core.
Jul 12 10:13:59.237292 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 10:13:59.350814 sshd[5496]: Connection closed by 10.0.0.1 port 58726
Jul 12 10:13:59.351331 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Jul 12 10:13:59.355318 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:58726.service: Deactivated successfully.
Jul 12 10:13:59.357169 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 10:13:59.357913 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit.
Jul 12 10:13:59.359041 systemd-logind[1577]: Removed session 16.
Jul 12 10:14:01.050521 containerd[1591]: time="2025-07-12T10:14:01.050470245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\" id:\"87fbcde6099ee25f4044418c153f660edadd2db49173bc636673022afda38bb3\" pid:5520 exited_at:{seconds:1752315241 nanos:50023857}"
Jul 12 10:14:04.370973 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:58740.service - OpenSSH per-connection server daemon (10.0.0.1:58740).
Jul 12 10:14:04.455125 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 58740 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:04.456695 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:04.463136 systemd-logind[1577]: New session 17 of user core.
Jul 12 10:14:04.472317 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 10:14:04.684208 sshd[5538]: Connection closed by 10.0.0.1 port 58740
Jul 12 10:14:04.686813 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:04.696291 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:58740.service: Deactivated successfully.
Jul 12 10:14:04.699055 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 10:14:04.700788 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit.
Jul 12 10:14:04.704740 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:58746.service - OpenSSH per-connection server daemon (10.0.0.1:58746).
Jul 12 10:14:04.707066 systemd-logind[1577]: Removed session 17.
Jul 12 10:14:04.757197 sshd[5551]: Accepted publickey for core from 10.0.0.1 port 58746 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:04.759303 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:04.764552 systemd-logind[1577]: New session 18 of user core.
Jul 12 10:14:04.774434 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 10:14:05.044197 sshd[5554]: Connection closed by 10.0.0.1 port 58746
Jul 12 10:14:05.044730 sshd-session[5551]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:05.058199 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:58746.service: Deactivated successfully.
Jul 12 10:14:05.060248 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 10:14:05.061187 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit.
Jul 12 10:14:05.064221 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:58752.service - OpenSSH per-connection server daemon (10.0.0.1:58752).
Jul 12 10:14:05.064972 systemd-logind[1577]: Removed session 18.
Jul 12 10:14:05.119303 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 58752 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:05.121119 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:05.125620 systemd-logind[1577]: New session 19 of user core.
Jul 12 10:14:05.133322 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 10:14:06.951212 sshd[5568]: Connection closed by 10.0.0.1 port 58752
Jul 12 10:14:06.951483 sshd-session[5565]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:06.960581 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:58752.service: Deactivated successfully.
Jul 12 10:14:06.962944 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 10:14:06.963454 systemd[1]: session-19.scope: Consumed 653ms CPU time, 71.4M memory peak.
Jul 12 10:14:06.964837 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit.
Jul 12 10:14:06.971058 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:46810.service - OpenSSH per-connection server daemon (10.0.0.1:46810).
Jul 12 10:14:06.971774 systemd-logind[1577]: Removed session 19.
Jul 12 10:14:07.049090 sshd[5588]: Accepted publickey for core from 10.0.0.1 port 46810 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:07.050999 sshd-session[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:07.056142 systemd-logind[1577]: New session 20 of user core.
Jul 12 10:14:07.067295 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 10:14:07.426633 sshd[5597]: Connection closed by 10.0.0.1 port 46810
Jul 12 10:14:07.427399 sshd-session[5588]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:07.436972 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:46810.service: Deactivated successfully.
Jul 12 10:14:07.440212 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 10:14:07.441039 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
Jul 12 10:14:07.444825 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:46812.service - OpenSSH per-connection server daemon (10.0.0.1:46812).
Jul 12 10:14:07.445485 systemd-logind[1577]: Removed session 20.
Jul 12 10:14:07.500603 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 46812 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:07.502929 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:07.508358 systemd-logind[1577]: New session 21 of user core.
Jul 12 10:14:07.521347 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 10:14:07.631825 sshd[5612]: Connection closed by 10.0.0.1 port 46812
Jul 12 10:14:07.632220 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:07.636252 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:46812.service: Deactivated successfully.
Jul 12 10:14:07.638125 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 10:14:07.638924 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit.
Jul 12 10:14:07.640107 systemd-logind[1577]: Removed session 21.
Jul 12 10:14:09.980555 containerd[1591]: time="2025-07-12T10:14:09.980499560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7db47e4f3558b0d226178d27f6969f6d53453e23addccf859664511bfb987ad2\" id:\"e2de649f60a5564e563dde60bfc7b741cdb176947cb6cf572b62c82508a087c1\" pid:5637 exited_at:{seconds:1752315249 nanos:980105496}"
Jul 12 10:14:12.645958 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:46826.service - OpenSSH per-connection server daemon (10.0.0.1:46826).
Jul 12 10:14:12.699770 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 46826 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:12.701296 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:12.705554 systemd-logind[1577]: New session 22 of user core.
Jul 12 10:14:12.713344 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 10:14:12.824212 sshd[5658]: Connection closed by 10.0.0.1 port 46826
Jul 12 10:14:12.824588 sshd-session[5655]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:12.828788 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:46826.service: Deactivated successfully.
Jul 12 10:14:12.831074 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 10:14:12.832032 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit.
Jul 12 10:14:12.833783 systemd-logind[1577]: Removed session 22.
Jul 12 10:14:14.350367 containerd[1591]: time="2025-07-12T10:14:14.350311743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a62ff33e191549802b2458478e5eb41756020c3973b4de6ec9af8de7fd2c8731\" id:\"c825db991421566ab575884bb1e49b28117f1570d13d3d30834035fab99b6b97\" pid:5682 exited_at:{seconds:1752315254 nanos:350081434}"
Jul 12 10:14:17.778566 containerd[1591]: time="2025-07-12T10:14:17.778500857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0565d0910ac10d5296e04f66347e8804962980f7d43919a425eb7467b227a11\" id:\"b63031801e9c571189036da6424b0202bf3711fc83d5727ffa28ab658b863e6a\" pid:5704 exited_at:{seconds:1752315257 nanos:778113339}"
Jul 12 10:14:17.838280 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:47556.service - OpenSSH per-connection server daemon (10.0.0.1:47556).
Jul 12 10:14:17.899814 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 47556 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:17.901298 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:17.906088 systemd-logind[1577]: New session 23 of user core.
Jul 12 10:14:17.917345 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 10:14:18.080242 sshd[5720]: Connection closed by 10.0.0.1 port 47556
Jul 12 10:14:18.080522 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:18.086254 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:47556.service: Deactivated successfully.
Jul 12 10:14:18.089090 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 10:14:18.090019 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit.
Jul 12 10:14:18.091668 systemd-logind[1577]: Removed session 23.
Jul 12 10:14:23.095384 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:47572.service - OpenSSH per-connection server daemon (10.0.0.1:47572).
Jul 12 10:14:23.183513 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 47572 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:23.185726 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:23.192004 systemd-logind[1577]: New session 24 of user core.
Jul 12 10:14:23.198330 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 10:14:23.376756 sshd[5739]: Connection closed by 10.0.0.1 port 47572
Jul 12 10:14:23.377031 sshd-session[5736]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:23.382550 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:47572.service: Deactivated successfully.
Jul 12 10:14:23.385052 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 10:14:23.385950 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit.
Jul 12 10:14:23.388214 systemd-logind[1577]: Removed session 24.
Jul 12 10:14:28.400056 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:50160.service - OpenSSH per-connection server daemon (10.0.0.1:50160).
Jul 12 10:14:28.470750 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 50160 ssh2: RSA SHA256:ljiWRYFI9rWPiw0u7CK8T0RxaTLaHmCBjbv/AQVYFjw
Jul 12 10:14:28.472583 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 10:14:28.477470 systemd-logind[1577]: New session 25 of user core.
Jul 12 10:14:28.486306 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 10:14:28.654558 sshd[5757]: Connection closed by 10.0.0.1 port 50160
Jul 12 10:14:28.655037 sshd-session[5754]: pam_unix(sshd:session): session closed for user core
Jul 12 10:14:28.659970 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:50160.service: Deactivated successfully.
Jul 12 10:14:28.662196 systemd[1]: session-25.scope: Deactivated successfully.
Jul 12 10:14:28.663166 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit.
Jul 12 10:14:28.665029 systemd-logind[1577]: Removed session 25.