Jul 8 10:11:43.820472 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 8 08:29:03 -00 2025
Jul 8 10:11:43.820493 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e
Jul 8 10:11:43.820503 kernel: BIOS-provided physical RAM map:
Jul 8 10:11:43.820510 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 8 10:11:43.820516 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 8 10:11:43.820522 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 8 10:11:43.820530 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 8 10:11:43.820536 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 8 10:11:43.820545 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 8 10:11:43.820551 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 8 10:11:43.820557 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 8 10:11:43.820563 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 8 10:11:43.820570 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 8 10:11:43.820576 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 8 10:11:43.820586 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 8 10:11:43.820593 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 8 10:11:43.820600 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 8 10:11:43.820606 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 8 10:11:43.820613 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 8 10:11:43.820620 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 8 10:11:43.820626 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 8 10:11:43.820633 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 8 10:11:43.820640 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 8 10:11:43.820646 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 8 10:11:43.820653 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 8 10:11:43.820662 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 8 10:11:43.820668 kernel: NX (Execute Disable) protection: active
Jul 8 10:11:43.820675 kernel: APIC: Static calls initialized
Jul 8 10:11:43.820684 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 8 10:11:43.820693 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 8 10:11:43.820702 kernel: extended physical RAM map:
Jul 8 10:11:43.820711 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 8 10:11:43.820719 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 8 10:11:43.820728 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 8 10:11:43.820737 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 8 10:11:43.820745 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 8 10:11:43.820757 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 8 10:11:43.820766 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 8 10:11:43.820775 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 8 10:11:43.820784 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 8 10:11:43.820798 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 8 10:11:43.820808 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 8 10:11:43.820819 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 8 10:11:43.820829 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 8 10:11:43.820839 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 8 10:11:43.820849 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 8 10:11:43.820859 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 8 10:11:43.820868 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 8 10:11:43.820878 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 8 10:11:43.820888 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 8 10:11:43.820897 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 8 10:11:43.820907 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 8 10:11:43.820919 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 8 10:11:43.820970 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 8 10:11:43.820980 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 8 10:11:43.820990 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 8 10:11:43.821000 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 8 10:11:43.821009 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 8 10:11:43.821019 kernel: efi: EFI v2.7 by EDK II
Jul 8 10:11:43.821029 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 8 10:11:43.821038 kernel: random: crng init done
Jul 8 10:11:43.821048 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 8 10:11:43.821057 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 8 10:11:43.821071 kernel: secureboot: Secure boot disabled
Jul 8 10:11:43.821080 kernel: SMBIOS 2.8 present.
Jul 8 10:11:43.821090 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 8 10:11:43.821099 kernel: DMI: Memory slots populated: 1/1
Jul 8 10:11:43.821108 kernel: Hypervisor detected: KVM
Jul 8 10:11:43.821118 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 8 10:11:43.821127 kernel: kvm-clock: using sched offset of 3639640813 cycles
Jul 8 10:11:43.821137 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 8 10:11:43.821147 kernel: tsc: Detected 2794.750 MHz processor
Jul 8 10:11:43.821157 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 8 10:11:43.821167 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 8 10:11:43.821179 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 8 10:11:43.821189 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 8 10:11:43.821198 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 8 10:11:43.821208 kernel: Using GB pages for direct mapping
Jul 8 10:11:43.821218 kernel: ACPI: Early table checksum verification disabled
Jul 8 10:11:43.821227 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 8 10:11:43.821237 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 8 10:11:43.821247 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821257 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821270 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 8 10:11:43.821280 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821289 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821299 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821309 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:11:43.821319 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 8 10:11:43.821328 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 8 10:11:43.821338 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 8 10:11:43.821350 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 8 10:11:43.821360 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 8 10:11:43.821369 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 8 10:11:43.821379 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 8 10:11:43.821389 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 8 10:11:43.821398 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 8 10:11:43.821408 kernel: No NUMA configuration found
Jul 8 10:11:43.821418 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 8 10:11:43.821427 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 8 10:11:43.821437 kernel: Zone ranges:
Jul 8 10:11:43.821449 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 8 10:11:43.821459 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 8 10:11:43.821469 kernel: Normal empty
Jul 8 10:11:43.821478 kernel: Device empty
Jul 8 10:11:43.821488 kernel: Movable zone start for each node
Jul 8 10:11:43.821497 kernel: Early memory node ranges
Jul 8 10:11:43.821507 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 8 10:11:43.821517 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 8 10:11:43.821526 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 8 10:11:43.821538 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 8 10:11:43.821548 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 8 10:11:43.821558 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 8 10:11:43.821567 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 8 10:11:43.821577 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 8 10:11:43.821587 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 8 10:11:43.821596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 8 10:11:43.821606 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 8 10:11:43.821627 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 8 10:11:43.821637 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 8 10:11:43.821647 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 8 10:11:43.821657 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 8 10:11:43.821670 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 8 10:11:43.821680 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 8 10:11:43.821690 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 8 10:11:43.821700 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 8 10:11:43.821711 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 8 10:11:43.821723 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 8 10:11:43.821733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 8 10:11:43.821744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 8 10:11:43.821754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 8 10:11:43.821764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 8 10:11:43.821774 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 8 10:11:43.821784 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 8 10:11:43.821795 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 8 10:11:43.821805 kernel: TSC deadline timer available
Jul 8 10:11:43.821817 kernel: CPU topo: Max. logical packages: 1
Jul 8 10:11:43.821827 kernel: CPU topo: Max. logical dies: 1
Jul 8 10:11:43.821837 kernel: CPU topo: Max. dies per package: 1
Jul 8 10:11:43.821847 kernel: CPU topo: Max. threads per core: 1
Jul 8 10:11:43.821857 kernel: CPU topo: Num. cores per package: 4
Jul 8 10:11:43.821867 kernel: CPU topo: Num. threads per package: 4
Jul 8 10:11:43.821877 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 8 10:11:43.821887 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 8 10:11:43.821897 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 8 10:11:43.821907 kernel: kvm-guest: setup PV sched yield
Jul 8 10:11:43.821920 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 8 10:11:43.821955 kernel: Booting paravirtualized kernel on KVM
Jul 8 10:11:43.821965 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 8 10:11:43.821976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 8 10:11:43.821986 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 8 10:11:43.821996 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 8 10:11:43.822006 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 8 10:11:43.822016 kernel: kvm-guest: PV spinlocks enabled
Jul 8 10:11:43.822026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 8 10:11:43.822041 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e
Jul 8 10:11:43.822050 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 8 10:11:43.822059 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 8 10:11:43.822068 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 8 10:11:43.822077 kernel: Fallback order for Node 0: 0
Jul 8 10:11:43.822087 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 8 10:11:43.822097 kernel: Policy zone: DMA32
Jul 8 10:11:43.822107 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 8 10:11:43.822119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 8 10:11:43.822129 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 8 10:11:43.822139 kernel: ftrace: allocated 157 pages with 5 groups
Jul 8 10:11:43.822148 kernel: Dynamic Preempt: voluntary
Jul 8 10:11:43.822158 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 8 10:11:43.822169 kernel: rcu: RCU event tracing is enabled.
Jul 8 10:11:43.822179 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 8 10:11:43.822188 kernel: Trampoline variant of Tasks RCU enabled.
Jul 8 10:11:43.822198 kernel: Rude variant of Tasks RCU enabled.
Jul 8 10:11:43.822208 kernel: Tracing variant of Tasks RCU enabled.
Jul 8 10:11:43.822220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 8 10:11:43.822231 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 8 10:11:43.822241 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:11:43.822251 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:11:43.822261 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:11:43.822271 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 8 10:11:43.822281 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 8 10:11:43.822291 kernel: Console: colour dummy device 80x25
Jul 8 10:11:43.822305 kernel: printk: legacy console [ttyS0] enabled
Jul 8 10:11:43.822315 kernel: ACPI: Core revision 20240827
Jul 8 10:11:43.822327 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 8 10:11:43.822337 kernel: APIC: Switch to symmetric I/O mode setup
Jul 8 10:11:43.822347 kernel: x2apic enabled
Jul 8 10:11:43.822357 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 8 10:11:43.822368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 8 10:11:43.822382 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 8 10:11:43.822392 kernel: kvm-guest: setup PV IPIs
Jul 8 10:11:43.822402 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 8 10:11:43.822416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 8 10:11:43.822425 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 8 10:11:43.822436 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 8 10:11:43.822446 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 8 10:11:43.822456 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 8 10:11:43.822465 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 8 10:11:43.822475 kernel: Spectre V2 : Mitigation: Retpolines
Jul 8 10:11:43.822485 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 8 10:11:43.822498 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 8 10:11:43.822508 kernel: RETBleed: Mitigation: untrained return thunk
Jul 8 10:11:43.822518 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 8 10:11:43.822528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 8 10:11:43.822538 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 8 10:11:43.822549 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 8 10:11:43.822558 kernel: x86/bugs: return thunk changed
Jul 8 10:11:43.822568 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 8 10:11:43.822578 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 8 10:11:43.822591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 8 10:11:43.822601 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 8 10:11:43.822611 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 8 10:11:43.822622 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 8 10:11:43.822631 kernel: Freeing SMP alternatives memory: 32K
Jul 8 10:11:43.822641 kernel: pid_max: default: 32768 minimum: 301
Jul 8 10:11:43.822651 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 8 10:11:43.822661 kernel: landlock: Up and running.
Jul 8 10:11:43.822671 kernel: SELinux: Initializing.
Jul 8 10:11:43.822684 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 10:11:43.822694 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 10:11:43.822704 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 8 10:11:43.822714 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 8 10:11:43.822724 kernel: ... version: 0
Jul 8 10:11:43.822734 kernel: ... bit width: 48
Jul 8 10:11:43.822744 kernel: ... generic registers: 6
Jul 8 10:11:43.822754 kernel: ... value mask: 0000ffffffffffff
Jul 8 10:11:43.822764 kernel: ... max period: 00007fffffffffff
Jul 8 10:11:43.822776 kernel: ... fixed-purpose events: 0
Jul 8 10:11:43.822786 kernel: ... event mask: 000000000000003f
Jul 8 10:11:43.822796 kernel: signal: max sigframe size: 1776
Jul 8 10:11:43.822806 kernel: rcu: Hierarchical SRCU implementation.
Jul 8 10:11:43.822817 kernel: rcu: Max phase no-delay instances is 400.
Jul 8 10:11:43.822827 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 8 10:11:43.822837 kernel: smp: Bringing up secondary CPUs ...
Jul 8 10:11:43.822850 kernel: smpboot: x86: Booting SMP configuration:
Jul 8 10:11:43.822861 kernel: .... node #0, CPUs: #1 #2 #3
Jul 8 10:11:43.822873 kernel: smp: Brought up 1 node, 4 CPUs
Jul 8 10:11:43.822883 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 8 10:11:43.822894 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54592K init, 2376K bss, 137196K reserved, 0K cma-reserved)
Jul 8 10:11:43.822904 kernel: devtmpfs: initialized
Jul 8 10:11:43.822914 kernel: x86/mm: Memory block size: 128MB
Jul 8 10:11:43.822953 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 8 10:11:43.822964 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 8 10:11:43.822975 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 8 10:11:43.822984 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 8 10:11:43.822998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 8 10:11:43.823009 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 8 10:11:43.823019 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 8 10:11:43.823029 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 8 10:11:43.823040 kernel: pinctrl core: initialized pinctrl subsystem
Jul 8 10:11:43.823050 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 8 10:11:43.823060 kernel: audit: initializing netlink subsys (disabled)
Jul 8 10:11:43.823071 kernel: audit: type=2000 audit(1751969499.536:1): state=initialized audit_enabled=0 res=1
Jul 8 10:11:43.823084 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 8 10:11:43.823094 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 8 10:11:43.823104 kernel: cpuidle: using governor menu
Jul 8 10:11:43.823114 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 8 10:11:43.823125 kernel: dca service started, version 1.12.1
Jul 8 10:11:43.823136 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 8 10:11:43.823146 kernel: PCI: Using configuration type 1 for base access
Jul 8 10:11:43.823156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 8 10:11:43.823166 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 8 10:11:43.823179 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 8 10:11:43.823189 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 8 10:11:43.823199 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 8 10:11:43.823209 kernel: ACPI: Added _OSI(Module Device)
Jul 8 10:11:43.823219 kernel: ACPI: Added _OSI(Processor Device)
Jul 8 10:11:43.823229 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 8 10:11:43.823239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 8 10:11:43.823249 kernel: ACPI: Interpreter enabled
Jul 8 10:11:43.823258 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 8 10:11:43.823271 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 8 10:11:43.823281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 8 10:11:43.823290 kernel: PCI: Using E820 reservations for host bridge windows
Jul 8 10:11:43.823300 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 8 10:11:43.823310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 8 10:11:43.823525 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 8 10:11:43.823678 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 8 10:11:43.823830 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 8 10:11:43.823848 kernel: PCI host bridge to bus 0000:00
Jul 8 10:11:43.824054 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 8 10:11:43.824183 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 8 10:11:43.824310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 8 10:11:43.824440 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 8 10:11:43.824575 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 8 10:11:43.824712 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 8 10:11:43.824867 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 8 10:11:43.825070 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 8 10:11:43.825232 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 8 10:11:43.825379 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 8 10:11:43.825524 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 8 10:11:43.825668 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 8 10:11:43.825818 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 8 10:11:43.826007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 8 10:11:43.826158 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 8 10:11:43.826333 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 8 10:11:43.826485 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 8 10:11:43.826650 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 8 10:11:43.826819 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 8 10:11:43.827005 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 8 10:11:43.827134 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 8 10:11:43.827260 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 8 10:11:43.827380 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 8 10:11:43.827492 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 8 10:11:43.827605 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 8 10:11:43.827717 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 8 10:11:43.827846 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 8 10:11:43.827999 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 8 10:11:43.828124 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 8 10:11:43.828238 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 8 10:11:43.828350 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 8 10:11:43.828479 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 8 10:11:43.828598 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 8 10:11:43.828608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 8 10:11:43.828616 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 8 10:11:43.828624 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 8 10:11:43.828631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 8 10:11:43.828639 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 8 10:11:43.828646 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 8 10:11:43.828654 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 8 10:11:43.828661 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 8 10:11:43.828673 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 8 10:11:43.828680 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 8 10:11:43.828688 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 8 10:11:43.828695 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 8 10:11:43.828703 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 8 10:11:43.828710 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 8 10:11:43.828717 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 8 10:11:43.828725 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 8 10:11:43.828732 kernel: iommu: Default domain type: Translated
Jul 8 10:11:43.828742 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 8 10:11:43.828750 kernel: efivars: Registered efivars operations
Jul 8 10:11:43.828757 kernel: PCI: Using ACPI for IRQ routing
Jul 8 10:11:43.828765 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 8 10:11:43.828772 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 8 10:11:43.828780 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 8 10:11:43.828787 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 8 10:11:43.828794 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 8 10:11:43.828802 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 8 10:11:43.828811 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 8 10:11:43.828819 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 8 10:11:43.828826 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 8 10:11:43.828967 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 8 10:11:43.829100 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 8 10:11:43.829214 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 8 10:11:43.829224 kernel: vgaarb: loaded
Jul 8 10:11:43.829232 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 8 10:11:43.829243 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 8 10:11:43.829251 kernel: clocksource: Switched to clocksource kvm-clock
Jul 8 10:11:43.829259 kernel: VFS: Disk quotas dquot_6.6.0
Jul 8 10:11:43.829267 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 8 10:11:43.829274 kernel: pnp: PnP ACPI init
Jul 8 10:11:43.829401 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 8 10:11:43.829427 kernel: pnp: PnP ACPI: found 6 devices
Jul 8 10:11:43.829437 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 8 10:11:43.829446 kernel: NET: Registered PF_INET protocol family
Jul 8 10:11:43.829454 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 8 10:11:43.829464 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 8 10:11:43.829472 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 8 10:11:43.829480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 8 10:11:43.829488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 8 10:11:43.829496 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 8 10:11:43.829504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 10:11:43.829514 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 10:11:43.829521 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 8 10:11:43.829530 kernel: NET: Registered PF_XDP protocol family
Jul 8 10:11:43.829647 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 8 10:11:43.829766 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 8 10:11:43.829873 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 8 10:11:43.830014 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 8 10:11:43.830122 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 8 10:11:43.830232 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 8 10:11:43.830339 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 8 10:11:43.830443 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 8 10:11:43.830454 kernel: PCI: CLS 0 bytes, default 64
Jul 8 10:11:43.830462 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 8 10:11:43.830470 kernel: Initialise system trusted keyrings
Jul 8 10:11:43.830478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 8 10:11:43.830486 kernel: Key type asymmetric registered
Jul 8 10:11:43.830494 kernel: Asymmetric key parser 'x509' registered
Jul 8 10:11:43.830504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 8 10:11:43.830512 kernel: io scheduler mq-deadline registered
Jul 8 10:11:43.830522 kernel: io scheduler kyber registered
Jul 8 10:11:43.830530 kernel: io scheduler bfq registered
Jul 8 10:11:43.830538 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 8 10:11:43.830546 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 8 10:11:43.830556 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 8 10:11:43.830564 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 8 10:11:43.830572 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 8 10:11:43.830580 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 8 10:11:43.830588 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 8 10:11:43.830596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 8 10:11:43.830604 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 8 10:11:43.830720 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 8 10:11:43.830735 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 8 10:11:43.830841 kernel: rtc_cmos 00:04: registered as rtc0
Jul 8 10:11:43.830981 kernel: rtc_cmos 00:04: setting system clock to 2025-07-08T10:11:43 UTC (1751969503)
Jul 8 10:11:43.831097 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 8 10:11:43.831108 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 8 10:11:43.831116 kernel: efifb: probing for efifb
Jul 8 10:11:43.831124 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 8 10:11:43.831132 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 8 10:11:43.831140 kernel: efifb: scrolling: redraw
Jul 8 10:11:43.831151 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 8 10:11:43.831159 kernel: Console: switching to colour frame buffer device 160x50
Jul 8 10:11:43.831167 kernel: fb0: EFI VGA frame buffer device
Jul 8 10:11:43.831175 kernel: pstore: Using crash dump compression: deflate
Jul 8 10:11:43.831183 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 8 10:11:43.831191 kernel: NET: Registered PF_INET6 protocol family
Jul 8 10:11:43.831199 kernel: Segment Routing with IPv6
Jul 8 10:11:43.831206 kernel: In-situ OAM (IOAM) with IPv6
Jul 8 10:11:43.831214 kernel: NET: Registered PF_PACKET protocol family
Jul 8 10:11:43.831224 kernel: Key type dns_resolver registered
Jul 8 10:11:43.831232 kernel: IPI shorthand broadcast: enabled
Jul 8 10:11:43.831240 kernel: sched_clock: Marking stable (5810002253, 154812895)->(5986148056, -21332908)
Jul 8 10:11:43.831248 kernel: registered taskstats version 1
Jul 8 10:11:43.831255 kernel: Loading compiled-in X.509 certificates
Jul 8 10:11:43.831264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 979ef2c0f02e8e58776916c0ada334818b3eaefe'
Jul 8 10:11:43.831271 kernel: Demotion targets for Node 0: null
Jul 8 10:11:43.831279 kernel: Key type .fscrypt registered
Jul 8 10:11:43.831287 kernel: Key type fscrypt-provisioning registered
Jul 8 10:11:43.831297 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 8 10:11:43.831305 kernel: ima: Allocated hash algorithm: sha1
Jul 8 10:11:43.831313 kernel: ima: No architecture policies found
Jul 8 10:11:43.831321 kernel: clk: Disabling unused clocks
Jul 8 10:11:43.831328 kernel: Warning: unable to open an initial console.
Jul 8 10:11:43.831336 kernel: Freeing unused kernel image (initmem) memory: 54592K
Jul 8 10:11:43.831344 kernel: Write protecting the kernel read-only data: 24576k
Jul 8 10:11:43.831352 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 8 10:11:43.831362 kernel: Run /init as init process
Jul 8 10:11:43.831370 kernel: with arguments:
Jul 8 10:11:43.831378 kernel: /init
Jul 8 10:11:43.831385 kernel: with environment:
Jul 8 10:11:43.831393 kernel: HOME=/
Jul 8 10:11:43.831400 kernel: TERM=linux
Jul 8 10:11:43.831408 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 8 10:11:43.831417 systemd[1]: Successfully made /usr/ read-only.
Jul 8 10:11:43.831428 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 8 10:11:43.831439 systemd[1]: Detected virtualization kvm.
Jul 8 10:11:43.831447 systemd[1]: Detected architecture x86-64.
Jul 8 10:11:43.831455 systemd[1]: Running in initrd.
Jul 8 10:11:43.831464 systemd[1]: No hostname configured, using default hostname.
Jul 8 10:11:43.831472 systemd[1]: Hostname set to .
Jul 8 10:11:43.831480 systemd[1]: Initializing machine ID from VM UUID.
Jul 8 10:11:43.831489 systemd[1]: Queued start job for default target initrd.target.
Jul 8 10:11:43.831499 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 10:11:43.831507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 10:11:43.831516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 8 10:11:43.831525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 8 10:11:43.831534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 8 10:11:43.831544 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 8 10:11:43.831554 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 8 10:11:43.831565 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 8 10:11:43.831574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 10:11:43.831582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 8 10:11:43.831591 systemd[1]: Reached target paths.target - Path Units.
Jul 8 10:11:43.831600 systemd[1]: Reached target slices.target - Slice Units.
Jul 8 10:11:43.831608 systemd[1]: Reached target swap.target - Swaps.
Jul 8 10:11:43.831617 systemd[1]: Reached target timers.target - Timer Units.
Jul 8 10:11:43.831626 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 8 10:11:43.831635 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 8 10:11:43.831646 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 8 10:11:43.831654 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 8 10:11:43.831663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 10:11:43.831672 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 8 10:11:43.831681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 10:11:43.831690 systemd[1]: Reached target sockets.target - Socket Units.
Jul 8 10:11:43.831698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 8 10:11:43.831707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 8 10:11:43.831718 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 8 10:11:43.831727 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 8 10:11:43.831736 systemd[1]: Starting systemd-fsck-usr.service...
Jul 8 10:11:43.831745 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 8 10:11:43.831769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 8 10:11:43.831778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:11:43.831786 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 8 10:11:43.831797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 10:11:43.831805 systemd[1]: Finished systemd-fsck-usr.service.
Jul 8 10:11:43.831814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 8 10:11:43.831841 systemd-journald[221]: Collecting audit messages is disabled.
Jul 8 10:11:43.831864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:11:43.831873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 8 10:11:43.831881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 8 10:11:43.831892 systemd-journald[221]: Journal started
Jul 8 10:11:43.831916 systemd-journald[221]: Runtime Journal (/run/log/journal/ad90bcc8d98649d59ffebff529743634) is 6M, max 48.5M, 42.4M free.
Jul 8 10:11:43.817877 systemd-modules-load[222]: Inserted module 'overlay'
Jul 8 10:11:43.835962 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 8 10:11:43.844964 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 8 10:11:43.847043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 8 10:11:43.848455 kernel: Bridge firewalling registered
Jul 8 10:11:43.847332 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 8 10:11:43.851052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 8 10:11:43.853969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 8 10:11:43.859072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 8 10:11:43.860279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 10:11:43.862153 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 8 10:11:43.870088 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 8 10:11:43.870411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 10:11:43.873670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 8 10:11:43.882456 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 8 10:11:43.884020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 8 10:11:43.892723 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e
Jul 8 10:11:43.932624 systemd-resolved[267]: Positive Trust Anchors:
Jul 8 10:11:43.932640 systemd-resolved[267]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 8 10:11:43.932670 systemd-resolved[267]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 8 10:11:43.935090 systemd-resolved[267]: Defaulting to hostname 'linux'.
Jul 8 10:11:43.936149 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 8 10:11:43.942162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 8 10:11:44.131967 kernel: SCSI subsystem initialized
Jul 8 10:11:44.140953 kernel: Loading iSCSI transport class v2.0-870.
Jul 8 10:11:44.151972 kernel: iscsi: registered transport (tcp)
Jul 8 10:11:44.172956 kernel: iscsi: registered transport (qla4xxx)
Jul 8 10:11:44.172978 kernel: QLogic iSCSI HBA Driver
Jul 8 10:11:44.195645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 8 10:11:44.223951 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 10:11:44.228361 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 8 10:11:44.279185 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 8 10:11:44.280786 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 8 10:11:44.334962 kernel: raid6: avx2x4 gen() 30279 MB/s
Jul 8 10:11:44.351954 kernel: raid6: avx2x2 gen() 30703 MB/s
Jul 8 10:11:44.368993 kernel: raid6: avx2x1 gen() 25668 MB/s
Jul 8 10:11:44.369017 kernel: raid6: using algorithm avx2x2 gen() 30703 MB/s
Jul 8 10:11:44.387042 kernel: raid6: .... xor() 19761 MB/s, rmw enabled
Jul 8 10:11:44.387079 kernel: raid6: using avx2x2 recovery algorithm
Jul 8 10:11:44.406948 kernel: xor: automatically using best checksumming function avx
Jul 8 10:11:44.608971 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 8 10:11:44.616286 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 8 10:11:44.619488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 10:11:44.658588 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 8 10:11:44.664216 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 10:11:44.668272 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 8 10:11:44.695108 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation
Jul 8 10:11:44.724462 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 8 10:11:44.727063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 8 10:11:44.816647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 10:11:44.818748 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 8 10:11:44.857966 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 8 10:11:44.862738 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 8 10:11:44.868947 kernel: cryptd: max_cpu_qlen set to 1000
Jul 8 10:11:44.877767 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 8 10:11:44.877824 kernel: GPT:9289727 != 19775487
Jul 8 10:11:44.877836 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 8 10:11:44.877845 kernel: GPT:9289727 != 19775487
Jul 8 10:11:44.877855 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 8 10:11:44.877865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 10:11:44.890746 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 10:11:44.891119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:11:44.927544 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:11:44.931539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:11:44.935263 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 8 10:11:44.940947 kernel: AES CTR mode by8 optimization enabled
Jul 8 10:11:44.940977 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 8 10:11:44.944286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 10:11:44.944398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:11:44.946272 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:11:44.952851 kernel: libata version 3.00 loaded.
Jul 8 10:11:44.972391 kernel: ahci 0000:00:1f.2: version 3.0
Jul 8 10:11:44.972605 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 8 10:11:44.982816 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 8 10:11:44.983065 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 8 10:11:44.983213 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 8 10:11:44.992473 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 8 10:11:45.002176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 8 10:11:45.009518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 8 10:11:45.010063 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 8 10:11:45.015948 kernel: scsi host0: ahci
Jul 8 10:11:45.017624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:11:45.020952 kernel: scsi host1: ahci
Jul 8 10:11:45.021149 kernel: scsi host2: ahci
Jul 8 10:11:45.022228 kernel: scsi host3: ahci
Jul 8 10:11:45.023083 kernel: scsi host4: ahci
Jul 8 10:11:45.026953 kernel: scsi host5: ahci
Jul 8 10:11:45.027127 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 8 10:11:45.027140 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 8 10:11:45.027150 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 8 10:11:45.027166 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 8 10:11:45.028827 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 8 10:11:45.028841 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 8 10:11:45.029399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 8 10:11:45.032522 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 8 10:11:45.062775 disk-uuid[635]: Primary Header is updated.
Jul 8 10:11:45.062775 disk-uuid[635]: Secondary Entries is updated.
Jul 8 10:11:45.062775 disk-uuid[635]: Secondary Header is updated.
Jul 8 10:11:45.066987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 10:11:45.071963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 10:11:45.337036 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 8 10:11:45.337096 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 8 10:11:45.337959 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 8 10:11:45.338953 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 8 10:11:45.338969 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 8 10:11:45.340305 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 8 10:11:45.340318 kernel: ata3.00: applying bridge limits
Jul 8 10:11:45.341356 kernel: ata3.00: configured for UDMA/100
Jul 8 10:11:45.341949 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 8 10:11:45.345965 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 8 10:11:45.416980 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 8 10:11:45.417203 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 8 10:11:45.439095 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 8 10:11:45.831604 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 8 10:11:45.834174 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 8 10:11:45.836640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 10:11:45.838820 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 8 10:11:45.841605 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 8 10:11:45.877763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 8 10:11:46.075866 disk-uuid[636]: The operation has completed successfully.
Jul 8 10:11:46.077232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 8 10:11:46.109919 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 8 10:11:46.110056 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 8 10:11:46.157434 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 8 10:11:46.186452 sh[665]: Success
Jul 8 10:11:46.205150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 8 10:11:46.205179 kernel: device-mapper: uevent: version 1.0.3
Jul 8 10:11:46.206323 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 8 10:11:46.216956 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 8 10:11:46.247567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 8 10:11:46.251993 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 8 10:11:46.269023 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 8 10:11:46.275782 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 8 10:11:46.275809 kernel: BTRFS: device fsid 8a7b8c84-7fe6-440f-95a1-3ff425e81fda devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677)
Jul 8 10:11:46.278162 kernel: BTRFS info (device dm-0): first mount of filesystem 8a7b8c84-7fe6-440f-95a1-3ff425e81fda
Jul 8 10:11:46.278183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 8 10:11:46.278194 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 8 10:11:46.283239 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 8 10:11:46.285482 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 8 10:11:46.287799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 8 10:11:46.290481 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 8 10:11:46.292407 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 8 10:11:46.322957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Jul 8 10:11:46.325371 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd
Jul 8 10:11:46.325424 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 8 10:11:46.325435 kernel: BTRFS info (device vda6): using free-space-tree
Jul 8 10:11:46.332955 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd
Jul 8 10:11:46.333376 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 8 10:11:46.336967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 8 10:11:46.463566 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 8 10:11:46.468760 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 8 10:11:46.509622 ignition[752]: Ignition 2.21.0 Jul 8 10:11:46.509637 ignition[752]: Stage: fetch-offline Jul 8 10:11:46.509677 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:46.509686 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:46.509786 ignition[752]: parsed url from cmdline: "" Jul 8 10:11:46.509790 ignition[752]: no config URL provided Jul 8 10:11:46.509795 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 8 10:11:46.509803 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 8 10:11:46.509826 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 8 10:11:46.509831 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 8 10:11:46.528296 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 8 10:11:46.549916 systemd-networkd[851]: lo: Link UP Jul 8 10:11:46.549938 systemd-networkd[851]: lo: Gained carrier Jul 8 10:11:46.551416 systemd-networkd[851]: Enumeration completed Jul 8 10:11:46.551505 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 8 10:11:46.551749 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 10:11:46.551753 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 8 10:11:46.553192 systemd[1]: Reached target network.target - Network. Jul 8 10:11:46.553814 systemd-networkd[851]: eth0: Link UP Jul 8 10:11:46.553819 systemd-networkd[851]: eth0: Gained carrier Jul 8 10:11:46.553826 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 10:11:46.571973 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 8 10:11:46.577286 ignition[752]: parsing config with SHA512: c100c73bab0d831ce710ffb145376ff85bc029d1b0a40ecc668a40e78333b60340fe4a83e9a5f71fa39b89075e8e1f21fc078e8cc124c7d1b4cf958fdbd8340f Jul 8 10:11:46.582330 unknown[752]: fetched base config from "system" Jul 8 10:11:46.582342 unknown[752]: fetched user config from "qemu" Jul 8 10:11:46.582731 ignition[752]: fetch-offline: fetch-offline passed Jul 8 10:11:46.582787 ignition[752]: Ignition finished successfully Jul 8 10:11:46.587001 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 8 10:11:46.589537 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 8 10:11:46.591564 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 8 10:11:46.647905 ignition[859]: Ignition 2.21.0 Jul 8 10:11:46.647917 ignition[859]: Stage: kargs Jul 8 10:11:46.648085 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:46.648095 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:46.652755 ignition[859]: kargs: kargs passed Jul 8 10:11:46.652845 ignition[859]: Ignition finished successfully Jul 8 10:11:46.657360 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 8 10:11:46.659339 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 8 10:11:46.717969 ignition[867]: Ignition 2.21.0 Jul 8 10:11:46.717983 ignition[867]: Stage: disks Jul 8 10:11:46.718118 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:46.718128 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:46.723356 ignition[867]: disks: disks passed Jul 8 10:11:46.723496 ignition[867]: Ignition finished successfully Jul 8 10:11:46.726486 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 8 10:11:46.728629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 8 10:11:46.730871 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 8 10:11:46.733302 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 8 10:11:46.735285 systemd[1]: Reached target sysinit.target - System Initialization. Jul 8 10:11:46.737224 systemd[1]: Reached target basic.target - Basic System. Jul 8 10:11:46.739982 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 8 10:11:46.791089 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 8 10:11:46.798806 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 8 10:11:46.801566 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 8 10:11:46.909954 kernel: EXT4-fs (vda9): mounted filesystem 29d3077b-4f9b-456e-9d11-186262f0abd5 r/w with ordered data mode. Quota mode: none. Jul 8 10:11:46.910251 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 8 10:11:46.911648 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 8 10:11:46.914039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 8 10:11:46.915955 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 8 10:11:46.917356 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 8 10:11:46.917392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 8 10:11:46.917412 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 8 10:11:46.943548 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 8 10:11:46.945837 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 8 10:11:46.950953 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Jul 8 10:11:46.951001 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:11:46.953148 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 8 10:11:46.953169 kernel: BTRFS info (device vda6): using free-space-tree Jul 8 10:11:46.958433 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 8 10:11:46.990675 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 8 10:11:46.995624 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 8 10:11:47.000060 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 8 10:11:47.004476 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 8 10:11:47.096845 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 8 10:11:47.099168 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 8 10:11:47.100896 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 8 10:11:47.125007 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:11:47.139660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 8 10:11:47.160412 ignition[999]: INFO : Ignition 2.21.0 Jul 8 10:11:47.160412 ignition[999]: INFO : Stage: mount Jul 8 10:11:47.197311 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:47.197311 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:47.197311 ignition[999]: INFO : mount: mount passed Jul 8 10:11:47.197311 ignition[999]: INFO : Ignition finished successfully Jul 8 10:11:47.164286 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 8 10:11:47.198090 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 8 10:11:47.274576 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 8 10:11:47.276122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 8 10:11:47.389671 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Jul 8 10:11:47.389713 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:11:47.389725 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 8 10:11:47.390508 kernel: BTRFS info (device vda6): using free-space-tree Jul 8 10:11:47.394886 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 8 10:11:47.427708 ignition[1028]: INFO : Ignition 2.21.0 Jul 8 10:11:47.427708 ignition[1028]: INFO : Stage: files Jul 8 10:11:47.429834 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:47.429834 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:47.432030 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Jul 8 10:11:47.432030 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 8 10:11:47.432030 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 8 10:11:47.436053 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 8 10:11:47.436053 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 8 10:11:47.436053 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 8 10:11:47.435097 unknown[1028]: wrote ssh authorized keys file for user: core Jul 8 10:11:47.440995 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 8 10:11:47.440995 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 8 10:11:47.474078 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 8 10:11:47.551519 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 8 10:11:47.551519 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 8 10:11:47.555453 
ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 8 10:11:47.555453 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 8 10:11:47.567967 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 8 10:11:48.330093 systemd-networkd[851]: eth0: Gained IPv6LL Jul 8 10:11:48.552679 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 8 10:11:49.157253 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 8 10:11:49.157253 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 8 10:11:49.161153 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 8 10:11:49.163894 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 8 10:11:49.163894 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 8 10:11:49.163894 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 8 10:11:49.169649 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 8 10:11:49.169649 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 8 10:11:49.169649 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 8 10:11:49.169649 ignition[1028]: INFO : files: 
op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 8 10:11:49.228389 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 8 10:11:49.234419 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 8 10:11:49.236198 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 8 10:11:49.236198 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 8 10:11:49.239007 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 8 10:11:49.240418 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 8 10:11:49.242144 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 8 10:11:49.243780 ignition[1028]: INFO : files: files passed Jul 8 10:11:49.243780 ignition[1028]: INFO : Ignition finished successfully Jul 8 10:11:49.246380 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 8 10:11:49.249488 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 8 10:11:49.252401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 8 10:11:49.268013 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 8 10:11:49.268142 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 8 10:11:49.272167 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory Jul 8 10:11:49.275950 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 8 10:11:49.275950 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 8 10:11:49.279033 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 8 10:11:49.281308 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 8 10:11:49.281559 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 8 10:11:49.285758 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 8 10:11:49.332816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 8 10:11:49.332954 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 8 10:11:49.334054 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 8 10:11:49.334507 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 8 10:11:49.334861 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 8 10:11:49.335611 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 8 10:11:49.375426 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 8 10:11:49.377832 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 8 10:11:49.399737 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 8 10:11:49.400962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 8 10:11:49.401253 systemd[1]: Stopped target timers.target - Timer Units. Jul 8 10:11:49.401561 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 8 10:11:49.401664 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 8 10:11:49.407822 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 8 10:11:49.408955 systemd[1]: Stopped target basic.target - Basic System. Jul 8 10:11:49.409434 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 8 10:11:49.409765 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 8 10:11:49.410263 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 8 10:11:49.410578 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 8 10:11:49.410908 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 8 10:11:49.411378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 8 10:11:49.411698 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 8 10:11:49.412180 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 8 10:11:49.412481 systemd[1]: Stopped target swap.target - Swaps. Jul 8 10:11:49.412772 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 8 10:11:49.412884 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 8 10:11:49.431256 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 8 10:11:49.431429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 8 10:11:49.434572 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 8 10:11:49.434708 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 8 10:11:49.436980 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 8 10:11:49.437096 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 8 10:11:49.440815 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 8 10:11:49.440941 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 8 10:11:49.443092 systemd[1]: Stopped target paths.target - Path Units. Jul 8 10:11:49.444031 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 8 10:11:49.449024 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 8 10:11:49.451608 systemd[1]: Stopped target slices.target - Slice Units. Jul 8 10:11:49.451787 systemd[1]: Stopped target sockets.target - Socket Units. Jul 8 10:11:49.453452 systemd[1]: iscsid.socket: Deactivated successfully. Jul 8 10:11:49.453541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 8 10:11:49.455162 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 8 10:11:49.455239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 8 10:11:49.456836 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 8 10:11:49.456965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 8 10:11:49.458575 systemd[1]: ignition-files.service: Deactivated successfully. Jul 8 10:11:49.458671 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 8 10:11:49.463321 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jul 8 10:11:49.465462 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 8 10:11:49.465591 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 8 10:11:49.468728 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 8 10:11:49.470436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 8 10:11:49.470629 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 8 10:11:49.471674 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 8 10:11:49.471864 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 8 10:11:49.476691 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 8 10:11:49.485096 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 8 10:11:49.503392 ignition[1083]: INFO : Ignition 2.21.0 Jul 8 10:11:49.503392 ignition[1083]: INFO : Stage: umount Jul 8 10:11:49.505429 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 8 10:11:49.505429 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:11:49.507913 ignition[1083]: INFO : umount: umount passed Jul 8 10:11:49.507913 ignition[1083]: INFO : Ignition finished successfully Jul 8 10:11:49.507319 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 8 10:11:49.509787 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 8 10:11:49.509953 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 8 10:11:49.510418 systemd[1]: Stopped target network.target - Network. Jul 8 10:11:49.513203 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 8 10:11:49.513269 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 8 10:11:49.515273 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 8 10:11:49.515322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 8 10:11:49.516502 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 8 10:11:49.516549 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 8 10:11:49.518761 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 8 10:11:49.518813 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 8 10:11:49.519886 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 8 10:11:49.520241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 8 10:11:49.530507 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 8 10:11:49.530637 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 8 10:11:49.535086 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 8 10:11:49.535351 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 8 10:11:49.535395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 8 10:11:49.539392 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 8 10:11:49.552510 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 8 10:11:49.552692 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 8 10:11:49.557114 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 8 10:11:49.557320 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jul 8 10:11:49.558636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 8 10:11:49.558674 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 8 10:11:49.564463 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 8 10:11:49.564537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 8 10:11:49.564584 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 8 10:11:49.567945 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 8 10:11:49.567991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 8 10:11:49.571508 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 8 10:11:49.571571 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 8 10:11:49.572730 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 8 10:11:49.574161 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 8 10:11:49.595997 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 8 10:11:49.596228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 8 10:11:49.598586 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 8 10:11:49.598731 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 8 10:11:49.601219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 8 10:11:49.601311 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 8 10:11:49.602525 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 8 10:11:49.602571 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 8 10:11:49.603463 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 8 10:11:49.603526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 8 10:11:49.604240 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 8 10:11:49.604299 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 8 10:11:49.604902 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 8 10:11:49.604978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 8 10:11:49.666617 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 8 10:11:49.669716 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 8 10:11:49.669814 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 8 10:11:49.674371 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 8 10:11:49.674430 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 8 10:11:49.677039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 8 10:11:49.677101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 8 10:11:49.685746 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 8 10:11:49.685893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 8 10:11:49.688243 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 8 10:11:49.688335 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 8 10:11:49.691717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 8 10:11:49.691874 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 8 10:11:49.695426 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 8 10:11:49.698493 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 8 10:11:49.723997 systemd[1]: Switching root. Jul 8 10:11:49.763717 systemd-journald[221]: Journal stopped Jul 8 10:11:51.072615 systemd-journald[221]: Received SIGTERM from PID 1 (systemd). Jul 8 10:11:51.072682 kernel: SELinux: policy capability network_peer_controls=1 Jul 8 10:11:51.072701 kernel: SELinux: policy capability open_perms=1 Jul 8 10:11:51.072713 kernel: SELinux: policy capability extended_socket_class=1 Jul 8 10:11:51.072724 kernel: SELinux: policy capability always_check_network=0 Jul 8 10:11:51.072739 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 8 10:11:51.072763 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 8 10:11:51.072775 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 8 10:11:51.072787 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 8 10:11:51.072798 kernel: SELinux: policy capability userspace_initial_context=0 Jul 8 10:11:51.072810 kernel: audit: type=1403 audit(1751969510.253:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 8 10:11:51.072822 systemd[1]: Successfully loaded SELinux policy in 62.630ms. Jul 8 10:11:51.072842 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.742ms. Jul 8 10:11:51.072855 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 8 10:11:51.072868 systemd[1]: Detected virtualization kvm. Jul 8 10:11:51.072882 systemd[1]: Detected architecture x86-64. Jul 8 10:11:51.072894 systemd[1]: Detected first boot. Jul 8 10:11:51.072905 systemd[1]: Initializing machine ID from VM UUID. Jul 8 10:11:51.072917 zram_generator::config[1128]: No configuration found. Jul 8 10:11:51.072946 kernel: Guest personality initialized and is inactive Jul 8 10:11:51.072958 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 8 10:11:51.072971 kernel: Initialized host personality Jul 8 10:11:51.072984 kernel: NET: Registered PF_VSOCK protocol family Jul 8 10:11:51.072998 systemd[1]: Populated /etc with preset unit settings. Jul 8 10:11:51.073013 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 8 10:11:51.073025 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 8 10:11:51.073037 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 8 10:11:51.073053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 8 10:11:51.073066 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 8 10:11:51.073078 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 8 10:11:51.073090 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 8 10:11:51.073102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 8 10:11:51.073117 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jul 8 10:11:51.073130 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 8 10:11:51.073144 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 8 10:11:51.073156 systemd[1]: Created slice user.slice - User and Session Slice. Jul 8 10:11:51.073168 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 8 10:11:51.073180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 8 10:11:51.073192 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 8 10:11:51.073204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 8 10:11:51.073217 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 8 10:11:51.073231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 8 10:11:51.073243 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 8 10:11:51.073255 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 8 10:11:51.073267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 8 10:11:51.073279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 8 10:11:51.073291 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 8 10:11:51.073305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 8 10:11:51.073317 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 8 10:11:51.073331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 8 10:11:51.073343 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 8 10:11:51.073355 systemd[1]: Reached target slices.target - Slice Units. Jul 8 10:11:51.073367 systemd[1]: Reached target swap.target - Swaps. Jul 8 10:11:51.073379 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 8 10:11:51.073391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 8 10:11:51.073403 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 8 10:11:51.073416 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 8 10:11:51.073428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 8 10:11:51.073442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 8 10:11:51.073455 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 8 10:11:51.073467 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 8 10:11:51.073479 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 8 10:11:51.073491 systemd[1]: Mounting media.mount - External Media Directory... Jul 8 10:11:51.073503 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:51.073517 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 8 10:11:51.073529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 8 10:11:51.073541 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 8 10:11:51.073556 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 8 10:11:51.073567 systemd[1]: Reached target machines.target - Containers. Jul 8 10:11:51.073580 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 8 10:11:51.073592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 8 10:11:51.073604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 8 10:11:51.073616 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 8 10:11:51.073628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 8 10:11:51.073640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 8 10:11:51.073653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 8 10:11:51.073665 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 8 10:11:51.073677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 8 10:11:51.073690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 8 10:11:51.073702 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 8 10:11:51.073714 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 8 10:11:51.073725 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 8 10:11:51.073737 systemd[1]: Stopped systemd-fsck-usr.service. Jul 8 10:11:51.073766 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 8 10:11:51.073778 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 8 10:11:51.073791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 8 10:11:51.073802 kernel: fuse: init (API version 7.41) Jul 8 10:11:51.073814 kernel: loop: module loaded Jul 8 10:11:51.073826 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 8 10:11:51.073840 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 8 10:11:51.073852 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 8 10:11:51.073865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 8 10:11:51.073877 systemd[1]: verity-setup.service: Deactivated successfully. Jul 8 10:11:51.073893 systemd[1]: Stopped verity-setup.service. Jul 8 10:11:51.073906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:51.073918 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 8 10:11:51.073945 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 8 10:11:51.073963 systemd[1]: Mounted media.mount - External Media Directory. Jul 8 10:11:51.073979 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 8 10:11:51.073992 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 8 10:11:51.074004 kernel: ACPI: bus type drm_connector registered Jul 8 10:11:51.074016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 8 10:11:51.074030 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 8 10:11:51.074042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 8 10:11:51.074054 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 8 10:11:51.074066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 8 10:11:51.074078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 8 10:11:51.074090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 8 10:11:51.074122 systemd-journald[1199]: Collecting audit messages is disabled. Jul 8 10:11:51.074145 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 8 10:11:51.074160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 8 10:11:51.074172 systemd-journald[1199]: Journal started Jul 8 10:11:51.074196 systemd-journald[1199]: Runtime Journal (/run/log/journal/ad90bcc8d98649d59ffebff529743634) is 6M, max 48.5M, 42.4M free. Jul 8 10:11:50.797510 systemd[1]: Queued start job for default target multi-user.target. Jul 8 10:11:50.818886 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 8 10:11:50.819497 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 8 10:11:51.076993 systemd[1]: Started systemd-journald.service - Journal Service. Jul 8 10:11:51.078170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 8 10:11:51.078383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 8 10:11:51.080008 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 8 10:11:51.080219 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 8 10:11:51.081800 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 8 10:11:51.082063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 8 10:11:51.083511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 8 10:11:51.085034 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 8 10:11:51.086778 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 8 10:11:51.088424 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 8 10:11:51.103620 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 8 10:11:51.106129 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 8 10:11:51.108237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 8 10:11:51.109328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 8 10:11:51.109357 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 8 10:11:51.111246 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 8 10:11:51.113510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 8 10:11:51.114618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 8 10:11:51.119357 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 8 10:11:51.125126 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 8 10:11:51.126429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 8 10:11:51.129122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 8 10:11:51.130447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 8 10:11:51.134032 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 8 10:11:51.134193 systemd-journald[1199]: Time spent on flushing to /var/log/journal/ad90bcc8d98649d59ffebff529743634 is 24.128ms for 1060 entries. Jul 8 10:11:51.134193 systemd-journald[1199]: System Journal (/var/log/journal/ad90bcc8d98649d59ffebff529743634) is 8M, max 195.6M, 187.6M free. Jul 8 10:11:51.164658 systemd-journald[1199]: Received client request to flush runtime journal. Jul 8 10:11:51.137339 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 8 10:11:51.141725 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 8 10:11:51.144614 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 8 10:11:51.147194 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 8 10:11:51.165210 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 8 10:11:51.170794 kernel: loop0: detected capacity change from 0 to 114000 Jul 8 10:11:51.169850 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 8 10:11:51.171894 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 8 10:11:51.174351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 8 10:11:51.181136 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 8 10:11:51.184908 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 8 10:11:51.198955 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 8 10:11:51.201061 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 8 10:11:51.203689 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 8 10:11:51.216953 kernel: loop1: detected capacity change from 0 to 146488 Jul 8 10:11:51.228835 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 8 10:11:51.235909 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jul 8 10:11:51.235980 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jul 8 10:11:51.241063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 8 10:11:51.243175 kernel: loop2: detected capacity change from 0 to 229808 Jul 8 10:11:51.267963 kernel: loop3: detected capacity change from 0 to 114000 Jul 8 10:11:51.275969 kernel: loop4: detected capacity change from 0 to 146488 Jul 8 10:11:51.289963 kernel: loop5: detected capacity change from 0 to 229808 Jul 8 10:11:51.298130 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 8 10:11:51.298825 (sd-merge)[1269]: Merged extensions into '/usr'. Jul 8 10:11:51.306601 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... 
Jul 8 10:11:51.306620 systemd[1]: Reloading... Jul 8 10:11:51.377000 zram_generator::config[1295]: No configuration found. Jul 8 10:11:51.466808 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 8 10:11:51.490726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:11:51.571516 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 8 10:11:51.572080 systemd[1]: Reloading finished in 264 ms. Jul 8 10:11:51.598077 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 8 10:11:51.600275 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 8 10:11:51.619673 systemd[1]: Starting ensure-sysext.service... Jul 8 10:11:51.622048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 8 10:11:51.637555 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Jul 8 10:11:51.637573 systemd[1]: Reloading... Jul 8 10:11:51.642746 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 8 10:11:51.643093 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 8 10:11:51.643400 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 8 10:11:51.643660 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 8 10:11:51.644545 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 8 10:11:51.644820 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 8 10:11:51.644892 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 8 10:11:51.649355 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Jul 8 10:11:51.649367 systemd-tmpfiles[1335]: Skipping /boot Jul 8 10:11:51.660014 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Jul 8 10:11:51.660105 systemd-tmpfiles[1335]: Skipping /boot Jul 8 10:11:51.695009 zram_generator::config[1368]: No configuration found. Jul 8 10:11:51.778405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:11:51.859172 systemd[1]: Reloading finished in 221 ms. Jul 8 10:11:51.881794 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 8 10:11:51.908181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 8 10:11:51.917434 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 8 10:11:51.919848 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 8 10:11:51.922418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 8 10:11:51.929206 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 8 10:11:51.932718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 8 10:11:51.937109 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 8 10:11:51.941010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:51.941178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 8 10:11:51.947816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 8 10:11:51.952106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 8 10:11:51.955955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 8 10:11:51.957565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 8 10:11:51.957672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 8 10:11:51.959465 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 8 10:11:51.960720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:51.962373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 8 10:11:51.966539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 8 10:11:51.969062 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 8 10:11:51.970751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 8 10:11:51.971245 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 8 10:11:51.975226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 8 10:11:51.975528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 8 10:11:51.984265 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 8 10:11:51.986685 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Jul 8 10:11:51.991388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:51.991648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 8 10:11:51.994101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 8 10:11:51.996168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 8 10:11:51.998187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 8 10:11:52.092963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 8 10:11:52.094160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 8 10:11:52.094288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 8 10:11:52.096113 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 8 10:11:52.097329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 8 10:11:52.099664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 8 10:11:52.100269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 8 10:11:52.102710 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 8 10:11:52.103588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 8 10:11:52.105455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 8 10:11:52.105783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 8 10:11:52.107522 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 8 10:11:52.109093 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 8 10:11:52.109568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 8 10:11:52.114377 systemd[1]: Finished ensure-sysext.service. Jul 8 10:11:52.119384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 8 10:11:52.119446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 8 10:11:52.121455 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 8 10:11:52.126294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 8 10:11:52.130326 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 8 10:11:52.134808 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 8 10:11:52.136484 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 8 10:11:52.207245 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 8 10:11:52.232108 augenrules[1485]: No rules Jul 8 10:11:52.243800 systemd[1]: audit-rules.service: Deactivated successfully. Jul 8 10:11:52.244088 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 8 10:11:52.254186 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 8 10:11:52.265965 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 8 10:11:52.276972 kernel: mousedev: PS/2 mouse device common for all mice Jul 8 10:11:52.282971 kernel: ACPI: button: Power Button [PWRF] Jul 8 10:11:52.287937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 8 10:11:52.295915 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 8 10:11:52.296501 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 8 10:11:52.296663 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 8 10:11:52.293289 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 8 10:11:52.313142 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 8 10:11:52.313291 systemd[1]: Reached target time-set.target - System Time Set. Jul 8 10:11:52.313617 systemd-resolved[1404]: Positive Trust Anchors: Jul 8 10:11:52.313625 systemd-resolved[1404]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 8 10:11:52.313659 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 8 10:11:52.319806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 8 10:11:52.321611 systemd-resolved[1404]: Defaulting to hostname 'linux'. Jul 8 10:11:52.323197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 8 10:11:52.324329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 8 10:11:52.326015 systemd[1]: Reached target sysinit.target - System Initialization. Jul 8 10:11:52.327162 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 8 10:11:52.328389 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 8 10:11:52.330012 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 8 10:11:52.331270 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 8 10:11:52.332603 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 8 10:11:52.334012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 8 10:11:52.335438 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 8 10:11:52.335469 systemd[1]: Reached target paths.target - Path Units. Jul 8 10:11:52.337000 systemd[1]: Reached target timers.target - Timer Units. Jul 8 10:11:52.338511 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 8 10:11:52.341959 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 8 10:11:52.346051 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 8 10:11:52.347468 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 8 10:11:52.348821 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 8 10:11:52.352196 systemd-networkd[1451]: lo: Link UP Jul 8 10:11:52.352680 systemd-networkd[1451]: lo: Gained carrier Jul 8 10:11:52.353918 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 8 10:11:52.356105 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 8 10:11:52.359660 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 8 10:11:52.359813 systemd-networkd[1451]: Enumeration completed Jul 8 10:11:52.361032 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 8 10:11:52.363434 systemd[1]: Reached target network.target - Network. Jul 8 10:11:52.364192 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 8 10:11:52.364571 systemd[1]: Reached target sockets.target - Socket Units. Jul 8 10:11:52.364692 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 8 10:11:52.365521 systemd[1]: Reached target basic.target - Basic System. Jul 8 10:11:52.366618 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 8 10:11:52.366645 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 8 10:11:52.368039 systemd-networkd[1451]: eth0: Link UP Jul 8 10:11:52.368689 systemd-networkd[1451]: eth0: Gained carrier Jul 8 10:11:52.368706 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 10:11:52.368900 systemd[1]: Starting containerd.service - containerd container runtime... Jul 8 10:11:52.370940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 8 10:11:52.375139 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 8 10:11:52.378156 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 8 10:11:52.384747 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 8 10:11:52.386006 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 8 10:11:52.386994 systemd-networkd[1451]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 8 10:11:52.387595 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Jul 8 10:11:52.388037 jq[1516]: false Jul 8 10:11:52.388735 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 8 10:11:54.276828 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 8 10:11:54.276887 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-07-08 10:11:54.276756 UTC. Jul 8 10:11:54.277991 systemd-resolved[1404]: Clock change detected. Flushing caches. Jul 8 10:11:54.279305 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 8 10:11:54.331329 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 8 10:11:54.357025 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jul 8 10:11:54.357037 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jul 8 10:11:54.358036 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 8 10:11:54.360988 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 8 10:11:54.363994 extend-filesystems[1519]: Found /dev/vda6 Jul 8 10:11:54.367276 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Jul 8 10:11:54.367276 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 8 10:11:54.367266 oslogin_cache_refresh[1520]: Failure getting users, quitting Jul 8 10:11:54.367381 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Jul 8 10:11:54.367284 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
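Per the entries above, eth0 receives 10.0.0.44/16 with gateway 10.0.0.1 over DHCPv4, after which systemd-timesyncd reaches 10.0.0.1:123 and steps the clock forward (which is why systemd-resolved flushes its caches and the journal timestamps jump from 10:11:52 to 10:11:54). A quick stdlib sanity check of the lease values copied from the log:

    import ipaddress

    # Values copied from the DHCPv4 lease entry above.
    iface = ipaddress.ip_interface("10.0.0.44/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(gateway in iface.network)         # True: the gateway is on-link
    print(iface.network.broadcast_address)  # 10.0.255.255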
Jul 8 10:11:54.367328 oslogin_cache_refresh[1520]: Refreshing group entry cache Jul 8 10:11:54.367780 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 8 10:11:54.372637 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Jul 8 10:11:54.372637 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 8 10:11:54.372629 oslogin_cache_refresh[1520]: Failure getting groups, quitting Jul 8 10:11:54.372640 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 8 10:11:54.373870 extend-filesystems[1519]: Found /dev/vda9 Jul 8 10:11:54.375670 extend-filesystems[1519]: Checking size of /dev/vda9 Jul 8 10:11:54.379265 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 8 10:11:54.384240 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 8 10:11:54.387929 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 8 10:11:54.388388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 8 10:11:54.389644 systemd[1]: Starting update-engine.service - Update Engine... Jul 8 10:11:54.391467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 8 10:11:54.400112 kernel: kvm_amd: TSC scaling supported Jul 8 10:11:54.400177 kernel: kvm_amd: Nested Virtualization enabled Jul 8 10:11:54.400191 kernel: kvm_amd: Nested Paging enabled Jul 8 10:11:54.400203 kernel: kvm_amd: LBR virtualization supported Jul 8 10:11:54.401208 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 8 10:11:54.401243 kernel: kvm_amd: Virtual GIF supported Jul 8 10:11:54.402958 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 8 10:11:54.405134 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 8 10:11:54.405576 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 8 10:11:54.413134 jq[1548]: true Jul 8 10:11:54.405994 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 8 10:11:54.408957 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 8 10:11:54.410544 systemd[1]: motdgen.service: Deactivated successfully. Jul 8 10:11:54.410770 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 8 10:11:54.414976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 8 10:11:54.416364 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 8 10:11:54.427246 update_engine[1547]: I20250708 10:11:54.426899 1547 main.cc:92] Flatcar Update Engine starting Jul 8 10:11:54.429487 extend-filesystems[1519]: Resized partition /dev/vda9 Jul 8 10:11:54.431110 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 8 10:11:54.436103 jq[1552]: true Jul 8 10:11:54.467096 extend-filesystems[1571]: resize2fs 1.47.2 (1-Jan-2025) Jul 8 10:11:54.472193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 8 10:11:54.479685 systemd-logind[1535]: Watching system buttons on /dev/input/event2 (Power Button) Jul 8 10:11:54.479705 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 8 10:11:54.480771 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 8 10:11:54.481349 systemd-logind[1535]: New seat seat0. Jul 8 10:11:54.489572 systemd[1]: Started systemd-logind.service - User Login Management. Jul 8 10:11:54.507299 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 8 10:11:54.511271 tar[1551]: linux-amd64/LICENSE Jul 8 10:11:54.766095 tar[1551]: linux-amd64/helm Jul 8 10:11:54.896647 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 8 10:11:54.920029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 8 10:11:54.992814 tar[1551]: linux-amd64/README.md Jul 8 10:11:55.017373 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 8 10:11:55.034688 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 8 10:11:55.053019 systemd[1]: issuegen.service: Deactivated successfully. Jul 8 10:11:55.053351 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 8 10:11:55.055117 kernel: EDAC MC: Ver: 3.0.0 Jul 8 10:11:55.055520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 8 10:11:55.065685 dbus-daemon[1512]: [system] SELinux support is enabled Jul 8 10:11:55.065875 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 8 10:11:55.068842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 8 10:11:55.071138 update_engine[1547]: I20250708 10:11:55.069777 1547 update_check_scheduler.cc:74] Next update check in 7m16s Jul 8 10:11:55.068871 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 8 10:11:55.068950 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 8 10:11:55.068964 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 8 10:11:55.070168 systemd[1]: Started update-engine.service - Update Engine. Jul 8 10:11:55.071747 dbus-daemon[1512]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 8 10:11:55.073196 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 8 10:11:55.073665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 8 10:11:55.076571 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 8 10:11:55.084362 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 8 10:11:55.084655 systemd[1]: Reached target getty.target - Login Prompts. Jul 8 10:11:55.219117 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 8 10:11:55.248717 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 8 10:11:55.255331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
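The sshd_keygen entry above is the output format of OpenSSH's ssh-keygen generating the missing RSA, ECDSA and Ed25519 host keys on first boot. As a rough stand-in, the sketch below generates a single Ed25519 key pair into a temporary directory (assumes ssh-keygen is on PATH; the real service writes its keys under /etc/ssh, so this is only an illustration):

    import os
    import subprocess
    import tempfile

    # Illustrative only: create one Ed25519 key pair the way a host-key
    # generator would, but into a throwaway directory.
    with tempfile.TemporaryDirectory() as d:
        key_path = os.path.join(d, "ssh_host_ed25519_key")
        subprocess.run(
            ["ssh-keygen", "-q", "-t", "ed25519", "-N", "", "-f", key_path],
            check=True,
        )
        print(os.listdir(d))  # private key plus the matching .pub file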
Jul 8 10:11:57.316978 containerd[1553]: time="2025-07-08T10:11:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 8 10:11:55.337200 systemd-networkd[1451]: eth0: Gained IPv6LL Jul 8 10:11:57.318942 containerd[1553]: time="2025-07-08T10:11:57.317646654Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 8 10:11:55.340520 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 8 10:11:55.342201 systemd[1]: Reached target network-online.target - Network is Online. Jul 8 10:11:55.344674 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 8 10:11:55.347043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:11:55.349550 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 8 10:11:55.531398 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 8 10:11:55.531662 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 8 10:11:55.533141 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 8 10:11:57.330385 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 8 10:11:57.331032 containerd[1553]: time="2025-07-08T10:11:57.330978391Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.705µs" Jul 8 10:11:57.331032 containerd[1553]: time="2025-07-08T10:11:57.331022353Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 8 10:11:57.331100 containerd[1553]: time="2025-07-08T10:11:57.331042311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 8 10:11:57.331269 containerd[1553]: time="2025-07-08T10:11:57.331240051Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 8 10:11:57.331269 containerd[1553]: time="2025-07-08T10:11:57.331262233Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 8 10:11:57.331308 containerd[1553]: time="2025-07-08T10:11:57.331288432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331376 containerd[1553]: time="2025-07-08T10:11:57.331357001Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331376 containerd[1553]: time="2025-07-08T10:11:57.331371438Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331696 containerd[1553]: time="2025-07-08T10:11:57.331664097Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331696 containerd[1553]: time="2025-07-08T10:11:57.331684886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331696 containerd[1553]: time="2025-07-08T10:11:57.331695095Z" level=info msg="skip 
loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331765 containerd[1553]: time="2025-07-08T10:11:57.331703931Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 8 10:11:57.331842 containerd[1553]: time="2025-07-08T10:11:57.331817214Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 8 10:11:57.332078 containerd[1553]: time="2025-07-08T10:11:57.332041304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 8 10:11:57.332102 containerd[1553]: time="2025-07-08T10:11:57.332092860Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 8 10:11:57.332132 containerd[1553]: time="2025-07-08T10:11:57.332104743Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 8 10:11:57.332180 containerd[1553]: time="2025-07-08T10:11:57.332148304Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 8 10:11:57.332755 containerd[1553]: time="2025-07-08T10:11:57.332456582Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 8 10:11:57.332755 containerd[1553]: time="2025-07-08T10:11:57.332522216Z" level=info msg="metadata content store policy set" policy=shared Jul 8 10:11:57.347586 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 8 10:11:57.347586 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 8 10:11:57.347586 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 8 10:11:57.351695 extend-filesystems[1519]: Resized filesystem in /dev/vda9 Jul 8 10:11:57.353225 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 8 10:11:57.353572 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
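The extend-filesystems entries above record an online resize of /dev/vda9 from 553,472 to 1,864,699 blocks of 4 KiB. Converted to bytes (figures copied from the resize2fs output):

    # Back-of-the-envelope conversion of the resize2fs figures above.
    BLOCK = 4096  # bytes; the log notes "(4k) blocks"
    old_blocks, new_blocks = 553_472, 1_864_699

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB

So the root filesystem grows from roughly 2.1 GiB to about 7.1 GiB to fill the enlarged vda9 partition.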
Jul 8 10:11:57.400647 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jul 8 10:11:57.401866 containerd[1553]: time="2025-07-08T10:11:57.401811783Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 8 10:11:57.401914 containerd[1553]: time="2025-07-08T10:11:57.401881925Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 8 10:11:57.401914 containerd[1553]: time="2025-07-08T10:11:57.401901882Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 8 10:11:57.401954 containerd[1553]: time="2025-07-08T10:11:57.401913144Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 8 10:11:57.401954 containerd[1553]: time="2025-07-08T10:11:57.401926799Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 8 10:11:57.401954 containerd[1553]: time="2025-07-08T10:11:57.401937018Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.401957206Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.401969399Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.401980039Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.401995778Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.402008142Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 8 10:11:57.402089 containerd[1553]: time="2025-07-08T10:11:57.402021346Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 8 10:11:57.402198 containerd[1553]: time="2025-07-08T10:11:57.402182689Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 8 10:11:57.402219 containerd[1553]: time="2025-07-08T10:11:57.402204099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 8 10:11:57.402238 containerd[1553]: time="2025-07-08T10:11:57.402220099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 8 10:11:57.402238 containerd[1553]: time="2025-07-08T10:11:57.402230899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 8 10:11:57.402274 containerd[1553]: time="2025-07-08T10:11:57.402241409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 8 10:11:57.402274 containerd[1553]: time="2025-07-08T10:11:57.402251788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 8 10:11:57.402274 containerd[1553]: time="2025-07-08T10:11:57.402262879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 8 10:11:57.402274 containerd[1553]: time="2025-07-08T10:11:57.402272848Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 8 10:11:57.402357 containerd[1553]: time="2025-07-08T10:11:57.402289710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 8 10:11:57.402357 containerd[1553]: time="2025-07-08T10:11:57.402299758Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 8 10:11:57.402357 containerd[1553]: time="2025-07-08T10:11:57.402311821Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 8 10:11:57.402410 containerd[1553]: time="2025-07-08T10:11:57.402388194Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 8 10:11:57.402410 containerd[1553]: time="2025-07-08T10:11:57.402403433Z" level=info msg="Start snapshots syncer" Jul 8 10:11:57.402447 containerd[1553]: time="2025-07-08T10:11:57.402426075Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 8 10:11:57.402501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 8 10:11:57.403791 containerd[1553]: time="2025-07-08T10:11:57.402869527Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 8 10:11:57.403791 containerd[1553]: time="2025-07-08T10:11:57.402916685Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.402974994Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403089690Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403107683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403118003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403128983Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403140685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403150834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403160562Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403181562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403195067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403204555Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403237877Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403252224Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 10:11:57.403949 containerd[1553]: time="2025-07-08T10:11:57.403260961Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403270188Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403277351Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403285857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403295746Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403462829Z" level=info msg="runtime interface created" Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403474962Z" level=info msg="created NRI interface" Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403483127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 8 10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403493346Z" level=info msg="Connect containerd service" Jul 8 
10:11:57.404247 containerd[1553]: time="2025-07-08T10:11:57.403519085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 8 10:11:57.404787 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 8 10:11:57.464668 containerd[1553]: time="2025-07-08T10:11:57.464619979Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 10:11:57.786153 containerd[1553]: time="2025-07-08T10:11:57.786011969Z" level=info msg="Start subscribing containerd event" Jul 8 10:11:57.786352 containerd[1553]: time="2025-07-08T10:11:57.786307312Z" level=info msg="Start recovering state" Jul 8 10:11:57.786470 containerd[1553]: time="2025-07-08T10:11:57.786240698Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 8 10:11:57.786470 containerd[1553]: time="2025-07-08T10:11:57.786427899Z" level=info msg="Start event monitor" Jul 8 10:11:57.786470 containerd[1553]: time="2025-07-08T10:11:57.786448828Z" level=info msg="Start cni network conf syncer for default" Jul 8 10:11:57.786470 containerd[1553]: time="2025-07-08T10:11:57.786458796Z" level=info msg="Start streaming server" Jul 8 10:11:57.786544 containerd[1553]: time="2025-07-08T10:11:57.786477852Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 8 10:11:57.786544 containerd[1553]: time="2025-07-08T10:11:57.786486528Z" level=info msg="runtime interface starting up..." Jul 8 10:11:57.786544 containerd[1553]: time="2025-07-08T10:11:57.786491828Z" level=info msg="starting plugins..." Jul 8 10:11:57.786544 containerd[1553]: time="2025-07-08T10:11:57.786507438Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 8 10:11:57.786544 containerd[1553]: time="2025-07-08T10:11:57.786524981Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 8 10:11:57.786813 systemd[1]: Started containerd.service - containerd container runtime. Jul 8 10:11:57.786975 containerd[1553]: time="2025-07-08T10:11:57.786953775Z" level=info msg="containerd successfully booted in 2.255428s" Jul 8 10:11:58.687577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:11:58.689230 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 8 10:11:58.690559 systemd[1]: Startup finished in 5.869s (kernel) + 6.614s (initrd) + 6.610s (userspace) = 19.094s. Jul 8 10:11:58.691358 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 10:11:59.168791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 8 10:11:59.170247 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:34368.service - OpenSSH per-connection server daemon (10.0.0.1:34368). Jul 8 10:11:59.260992 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 34368 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:11:59.262883 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:11:59.269683 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 8 10:11:59.270799 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 8 10:11:59.277738 systemd-logind[1535]: New session 1 of user core. 
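The containerd CRI plugin logs an error above because /etc/cni/net.d contains no network configuration yet; pod networking stays unavailable until a CNI plugin installed later drops a config file there. A small diagnostic sketch performing the same kind of check (the accepted extensions mirror what libcni looks for; treat this as an operator aid, not the plugin's own code):

    import glob
    import json
    import os
    import sys

    # Look for CNI network configs the way an operator would when debugging
    # the "no network config found in /etc/cni/net.d" warning above.
    conf_dir = "/etc/cni/net.d"
    candidates = sorted(
        p
        for ext in ("*.conf", "*.conflist", "*.json")
        for p in glob.glob(os.path.join(conf_dir, ext))
    )
    if not candidates:
        sys.exit(f"no CNI network config found in {conf_dir}")
    for path in candidates:
        with open(path) as f:
            cfg = json.load(f)
        print(path, "->", cfg.get("name"), cfg.get("cniVersion"))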
Jul 8 10:11:59.294435 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 8 10:11:59.297378 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 8 10:11:59.308678 kubelet[1667]: E0708 10:11:59.308635 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 10:11:59.312661 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 8 10:11:59.312746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 10:11:59.312938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 10:11:59.313315 systemd[1]: kubelet.service: Consumed 1.808s CPU time, 266.1M memory peak. Jul 8 10:11:59.315737 systemd-logind[1535]: New session c1 of user core. Jul 8 10:11:59.537634 systemd[1684]: Queued start job for default target default.target. Jul 8 10:11:59.557496 systemd[1684]: Created slice app.slice - User Application Slice. Jul 8 10:11:59.557524 systemd[1684]: Reached target paths.target - Paths. Jul 8 10:11:59.557566 systemd[1684]: Reached target timers.target - Timers. Jul 8 10:11:59.559115 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 8 10:11:59.571115 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 8 10:11:59.571236 systemd[1684]: Reached target sockets.target - Sockets. Jul 8 10:11:59.571276 systemd[1684]: Reached target basic.target - Basic System. Jul 8 10:11:59.571317 systemd[1684]: Reached target default.target - Main User Target. Jul 8 10:11:59.571347 systemd[1684]: Startup finished in 248ms. Jul 8 10:11:59.571945 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 8 10:11:59.573939 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 8 10:11:59.644477 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384). Jul 8 10:11:59.705716 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:11:59.709730 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:11:59.713805 systemd-logind[1535]: New session 2 of user core. Jul 8 10:11:59.729201 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 8 10:11:59.784453 sshd[1699]: Connection closed by 10.0.0.1 port 34384 Jul 8 10:11:59.784888 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 8 10:11:59.795791 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:34384.service: Deactivated successfully. Jul 8 10:11:59.797678 systemd[1]: session-2.scope: Deactivated successfully. Jul 8 10:11:59.798426 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit. Jul 8 10:11:59.801501 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:34390.service - OpenSSH per-connection server daemon (10.0.0.1:34390). Jul 8 10:11:59.802100 systemd-logind[1535]: Removed session 2. 
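The kubelet above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written by kubeadm during init/join, so until then systemd just keeps restarting the unit (see the restart-counter entries further down). A trivial pre-flight check along the same lines (a hypothetical helper, not part of the kubelet):

    from pathlib import Path

    # Mirror the condition behind the kubelet failure logged above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.is_file():
        raise SystemExit(
            f"kubelet config missing: {cfg} (expected to be written by kubeadm)"
        )
    print(f"found kubelet config, {cfg.stat().st_size} bytes")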
Jul 8 10:11:59.858978 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 34390 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:11:59.860657 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:11:59.866186 systemd-logind[1535]: New session 3 of user core. Jul 8 10:11:59.883391 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 8 10:11:59.934456 sshd[1708]: Connection closed by 10.0.0.1 port 34390 Jul 8 10:11:59.934960 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jul 8 10:11:59.953864 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:34390.service: Deactivated successfully. Jul 8 10:11:59.955810 systemd[1]: session-3.scope: Deactivated successfully. Jul 8 10:11:59.956568 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit. Jul 8 10:11:59.959425 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:34400.service - OpenSSH per-connection server daemon (10.0.0.1:34400). Jul 8 10:11:59.959965 systemd-logind[1535]: Removed session 3. Jul 8 10:12:00.013335 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:12:00.015301 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:12:00.019811 systemd-logind[1535]: New session 4 of user core. Jul 8 10:12:00.029226 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 8 10:12:00.082742 sshd[1717]: Connection closed by 10.0.0.1 port 34400 Jul 8 10:12:00.083317 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jul 8 10:12:00.098049 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:34400.service: Deactivated successfully. Jul 8 10:12:00.100147 systemd[1]: session-4.scope: Deactivated successfully. Jul 8 10:12:00.100922 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. Jul 8 10:12:00.104031 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:34416.service - OpenSSH per-connection server daemon (10.0.0.1:34416). Jul 8 10:12:00.104718 systemd-logind[1535]: Removed session 4. Jul 8 10:12:00.166959 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 34416 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:12:00.167791 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:12:00.172631 systemd-logind[1535]: New session 5 of user core. Jul 8 10:12:00.191329 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 8 10:12:00.252418 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 8 10:12:00.252762 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:12:00.279985 sudo[1727]: pam_unix(sudo:session): session closed for user root Jul 8 10:12:00.281811 sshd[1726]: Connection closed by 10.0.0.1 port 34416 Jul 8 10:12:00.282330 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jul 8 10:12:00.296728 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:34416.service: Deactivated successfully. Jul 8 10:12:00.298932 systemd[1]: session-5.scope: Deactivated successfully. Jul 8 10:12:00.299860 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. Jul 8 10:12:00.303646 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:34428.service - OpenSSH per-connection server daemon (10.0.0.1:34428). Jul 8 10:12:00.304377 systemd-logind[1535]: Removed session 5. 
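Sessions 2 through 5 above follow the same short-lived pattern: Accepted publickey, PAM session opened, session scope started, then the connection closes and the session is cleaned up. For post-hoc analysis, the relevant fields can be pulled out of the "Accepted publickey" lines with a small parser (the regex below is my own, matched against the format seen in this log):

    import re

    LINE = (
        "sshd[1714]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: "
        "RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4"
    )

    pattern = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+) "
        r"ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
    )
    m = pattern.search(LINE)
    print(m.group("user"), m.group("ip"), m.group("port"), m.group("fingerprint"))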
Jul 8 10:12:00.358242 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 34428 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:12:00.359589 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:12:00.364557 systemd-logind[1535]: New session 6 of user core. Jul 8 10:12:00.374218 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 8 10:12:00.430040 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 8 10:12:00.430461 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:12:00.616851 sudo[1738]: pam_unix(sudo:session): session closed for user root Jul 8 10:12:00.623339 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 8 10:12:00.623628 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:12:00.634667 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 8 10:12:00.691507 augenrules[1760]: No rules Jul 8 10:12:00.692630 systemd[1]: audit-rules.service: Deactivated successfully. Jul 8 10:12:00.692990 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 8 10:12:00.694343 sudo[1737]: pam_unix(sudo:session): session closed for user root Jul 8 10:12:00.696078 sshd[1736]: Connection closed by 10.0.0.1 port 34428 Jul 8 10:12:00.696512 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jul 8 10:12:00.705135 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:34428.service: Deactivated successfully. Jul 8 10:12:00.706831 systemd[1]: session-6.scope: Deactivated successfully. Jul 8 10:12:00.707711 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. Jul 8 10:12:00.710858 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:34432.service - OpenSSH per-connection server daemon (10.0.0.1:34432). Jul 8 10:12:00.711846 systemd-logind[1535]: Removed session 6. Jul 8 10:12:00.782929 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 34432 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:12:00.784732 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:12:00.789355 systemd-logind[1535]: New session 7 of user core. Jul 8 10:12:00.803214 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 8 10:12:00.855996 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 8 10:12:00.856302 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:12:01.534499 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 8 10:12:01.556420 (dockerd)[1793]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 8 10:12:02.058420 dockerd[1793]: time="2025-07-08T10:12:02.058351244Z" level=info msg="Starting up" Jul 8 10:12:02.059259 dockerd[1793]: time="2025-07-08T10:12:02.059238046Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 8 10:12:02.080045 dockerd[1793]: time="2025-07-08T10:12:02.079980669Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 8 10:12:02.478856 dockerd[1793]: time="2025-07-08T10:12:02.478775008Z" level=info msg="Loading containers: start." Jul 8 10:12:02.490104 kernel: Initializing XFRM netlink socket Jul 8 10:12:02.758822 systemd-networkd[1451]: docker0: Link UP Jul 8 10:12:02.763827 dockerd[1793]: time="2025-07-08T10:12:02.763784681Z" level=info msg="Loading containers: done." Jul 8 10:12:02.780909 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2219422445-merged.mount: Deactivated successfully. Jul 8 10:12:02.781371 dockerd[1793]: time="2025-07-08T10:12:02.781319900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 8 10:12:02.781439 dockerd[1793]: time="2025-07-08T10:12:02.781398487Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 8 10:12:02.781620 dockerd[1793]: time="2025-07-08T10:12:02.781483697Z" level=info msg="Initializing buildkit" Jul 8 10:12:02.815775 dockerd[1793]: time="2025-07-08T10:12:02.815709180Z" level=info msg="Completed buildkit initialization" Jul 8 10:12:02.822552 dockerd[1793]: time="2025-07-08T10:12:02.822491156Z" level=info msg="Daemon has completed initialization" Jul 8 10:12:02.822649 dockerd[1793]: time="2025-07-08T10:12:02.822585132Z" level=info msg="API listen on /run/docker.sock" Jul 8 10:12:02.822761 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 8 10:12:03.582141 containerd[1553]: time="2025-07-08T10:12:03.582058585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 8 10:12:04.298704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518059073.mount: Deactivated successfully. 
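Once dockerd finishes initialization it reports "API listen on /run/docker.sock". That endpoint speaks the Docker Engine HTTP API over a Unix socket, so a stdlib-only liveness probe can look like the sketch below (assumes a running daemon and permission to read the socket, e.g. root or membership in the docker group):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-Unix-socket client for the Docker Engine API."""

        def __init__(self, path="/run/docker.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection()
    conn.request("GET", "/_ping")             # Engine API liveness endpoint
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())  # 200 OK on a healthy daemon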
Jul 8 10:12:05.357549 containerd[1553]: time="2025-07-08T10:12:05.357487523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:05.358141 containerd[1553]: time="2025-07-08T10:12:05.358099230Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 8 10:12:05.359230 containerd[1553]: time="2025-07-08T10:12:05.359199032Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:05.361725 containerd[1553]: time="2025-07-08T10:12:05.361689232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:05.362738 containerd[1553]: time="2025-07-08T10:12:05.362704475Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.780567724s" Jul 8 10:12:05.362771 containerd[1553]: time="2025-07-08T10:12:05.362741264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 8 10:12:05.363260 containerd[1553]: time="2025-07-08T10:12:05.363238617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 8 10:12:06.778232 containerd[1553]: time="2025-07-08T10:12:06.778164663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:06.778868 containerd[1553]: time="2025-07-08T10:12:06.778798040Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 8 10:12:06.779866 containerd[1553]: time="2025-07-08T10:12:06.779841567Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:06.782411 containerd[1553]: time="2025-07-08T10:12:06.782386448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:06.783429 containerd[1553]: time="2025-07-08T10:12:06.783404798Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.420140232s" Jul 8 10:12:06.783468 containerd[1553]: time="2025-07-08T10:12:06.783434884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 8 10:12:06.784252 containerd[1553]: 
time="2025-07-08T10:12:06.784217562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 8 10:12:08.648347 containerd[1553]: time="2025-07-08T10:12:08.648266884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:08.649441 containerd[1553]: time="2025-07-08T10:12:08.649410288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 8 10:12:08.650835 containerd[1553]: time="2025-07-08T10:12:08.650733198Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:08.653705 containerd[1553]: time="2025-07-08T10:12:08.653662951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:08.654933 containerd[1553]: time="2025-07-08T10:12:08.654892557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.870636723s" Jul 8 10:12:08.654970 containerd[1553]: time="2025-07-08T10:12:08.654931099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 8 10:12:08.655586 containerd[1553]: time="2025-07-08T10:12:08.655555109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 8 10:12:09.316416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 8 10:12:09.318216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:09.605021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:09.624407 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 10:12:09.844313 kubelet[2077]: E0708 10:12:09.844241 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 10:12:09.851654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 10:12:09.851862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 10:12:09.852298 systemd[1]: kubelet.service: Consumed 389ms CPU time, 109.6M memory peak. Jul 8 10:12:10.314684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756315967.mount: Deactivated successfully. 
Jul 8 10:12:10.839387 containerd[1553]: time="2025-07-08T10:12:10.839310390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:10.839827 containerd[1553]: time="2025-07-08T10:12:10.839763590Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 8 10:12:10.840864 containerd[1553]: time="2025-07-08T10:12:10.840824859Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:10.843095 containerd[1553]: time="2025-07-08T10:12:10.842820912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:10.843343 containerd[1553]: time="2025-07-08T10:12:10.843293648Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.187708904s" Jul 8 10:12:10.843343 containerd[1553]: time="2025-07-08T10:12:10.843336779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 8 10:12:10.844056 containerd[1553]: time="2025-07-08T10:12:10.844005142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 8 10:12:11.356391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948797355.mount: Deactivated successfully. 
Jul 8 10:12:12.911207 containerd[1553]: time="2025-07-08T10:12:12.911040153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:12.912342 containerd[1553]: time="2025-07-08T10:12:12.912254970Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 8 10:12:12.913785 containerd[1553]: time="2025-07-08T10:12:12.913727822Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:12.918730 containerd[1553]: time="2025-07-08T10:12:12.918681811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:12.920170 containerd[1553]: time="2025-07-08T10:12:12.920126239Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.076082864s" Jul 8 10:12:12.920170 containerd[1553]: time="2025-07-08T10:12:12.920161004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 8 10:12:12.920729 containerd[1553]: time="2025-07-08T10:12:12.920711246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 8 10:12:13.383940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1772321940.mount: Deactivated successfully. 
Jul 8 10:12:13.401736 containerd[1553]: time="2025-07-08T10:12:13.401663752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:12:13.402638 containerd[1553]: time="2025-07-08T10:12:13.402609234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 8 10:12:13.404087 containerd[1553]: time="2025-07-08T10:12:13.404008989Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:12:13.405974 containerd[1553]: time="2025-07-08T10:12:13.405941482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:12:13.406676 containerd[1553]: time="2025-07-08T10:12:13.406647807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 485.858495ms" Jul 8 10:12:13.406730 containerd[1553]: time="2025-07-08T10:12:13.406680869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 8 10:12:13.407152 containerd[1553]: time="2025-07-08T10:12:13.407098752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 8 10:12:14.447303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116889253.mount: Deactivated successfully. 
Jul 8 10:12:16.665102 containerd[1553]: time="2025-07-08T10:12:16.665000116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:16.667036 containerd[1553]: time="2025-07-08T10:12:16.666965201Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 8 10:12:16.668659 containerd[1553]: time="2025-07-08T10:12:16.668619172Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:16.671737 containerd[1553]: time="2025-07-08T10:12:16.671686033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:16.672718 containerd[1553]: time="2025-07-08T10:12:16.672638418Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.265510071s" Jul 8 10:12:16.672718 containerd[1553]: time="2025-07-08T10:12:16.672694564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 8 10:12:20.066583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 8 10:12:20.068484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:20.270435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:20.282501 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 10:12:20.360942 kubelet[2236]: E0708 10:12:20.360799 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 10:12:20.365458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 10:12:20.365640 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 10:12:20.365992 systemd[1]: kubelet.service: Consumed 243ms CPU time, 109.4M memory peak. Jul 8 10:12:20.401907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:20.402179 systemd[1]: kubelet.service: Consumed 243ms CPU time, 109.4M memory peak. Jul 8 10:12:20.404509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:20.429713 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Jul 8 10:12:20.429730 systemd[1]: Reloading... Jul 8 10:12:20.509163 zram_generator::config[2294]: No configuration found. Jul 8 10:12:21.464833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:12:21.580151 systemd[1]: Reloading finished in 1150 ms. 
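Each PullImage entry above reports a size and a wall-clock duration, e.g. the etcd image: 58,938,593 bytes in about 3.27 s. A rough effective rate, ignoring layer reuse and decompression overhead:

    # Figures copied from the etcd PullImage entry above.
    size_bytes = 58_938_593
    duration_s = 3.265510071

    mib_per_s = size_bytes / duration_s / 2**20
    print(f"~{mib_per_s:.1f} MiB/s effective pull rate")  # about 17 MiB/s

The much smaller pause image (320,368 bytes in ~486 ms) is dominated by round-trip latency rather than bandwidth, which is why its apparent rate comes out far lower.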
Jul 8 10:12:21.652779 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 8 10:12:21.652917 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 8 10:12:21.653355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:21.653510 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak. Jul 8 10:12:21.655605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:22.437675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:22.442234 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 8 10:12:22.482734 kubelet[2342]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:12:22.482734 kubelet[2342]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 8 10:12:22.482734 kubelet[2342]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:12:22.482734 kubelet[2342]: I0708 10:12:22.482190 2342 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 8 10:12:22.923832 kubelet[2342]: I0708 10:12:22.923773 2342 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 8 10:12:22.923832 kubelet[2342]: I0708 10:12:22.923810 2342 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 8 10:12:22.924092 kubelet[2342]: I0708 10:12:22.924050 2342 server.go:956] "Client rotation is on, will bootstrap in background" Jul 8 10:12:22.955107 kubelet[2342]: E0708 10:12:22.955038 2342 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 8 10:12:22.957353 kubelet[2342]: I0708 10:12:22.957302 2342 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 8 10:12:22.962958 kubelet[2342]: I0708 10:12:22.962912 2342 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 8 10:12:22.968420 kubelet[2342]: I0708 10:12:22.968386 2342 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 8 10:12:22.968713 kubelet[2342]: I0708 10:12:22.968676 2342 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 8 10:12:22.968871 kubelet[2342]: I0708 10:12:22.968702 2342 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 8 10:12:22.968982 kubelet[2342]: I0708 10:12:22.968876 2342 topology_manager.go:138] "Creating topology manager with none policy" Jul 8 10:12:22.968982 kubelet[2342]: I0708 10:12:22.968887 2342 container_manager_linux.go:303] "Creating device plugin manager" Jul 8 10:12:22.969043 kubelet[2342]: I0708 10:12:22.969029 2342 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:12:22.971146 kubelet[2342]: I0708 10:12:22.971115 2342 kubelet.go:480] "Attempting to sync node with API server" Jul 8 10:12:22.971146 kubelet[2342]: I0708 10:12:22.971149 2342 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 8 10:12:22.971218 kubelet[2342]: I0708 10:12:22.971171 2342 kubelet.go:386] "Adding apiserver pod source" Jul 8 10:12:22.971218 kubelet[2342]: I0708 10:12:22.971193 2342 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 8 10:12:22.975357 kubelet[2342]: E0708 10:12:22.975284 2342 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 8 10:12:22.975610 kubelet[2342]: I0708 10:12:22.975583 2342 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 8 10:12:22.975954 kubelet[2342]: E0708 10:12:22.975920 2342 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 8 10:12:22.976057 kubelet[2342]: I0708 10:12:22.976030 2342 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 8 10:12:22.976601 kubelet[2342]: W0708 10:12:22.976578 2342 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 8 10:12:22.979601 kubelet[2342]: I0708 10:12:22.979562 2342 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 8 10:12:22.979673 kubelet[2342]: I0708 10:12:22.979643 2342 server.go:1289] "Started kubelet" Jul 8 10:12:22.980182 kubelet[2342]: I0708 10:12:22.980139 2342 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 8 10:12:22.981125 kubelet[2342]: I0708 10:12:22.981060 2342 server.go:317] "Adding debug handlers to kubelet server" Jul 8 10:12:22.981933 kubelet[2342]: I0708 10:12:22.981909 2342 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 8 10:12:22.984711 kubelet[2342]: I0708 10:12:22.984693 2342 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 8 10:12:22.985687 kubelet[2342]: I0708 10:12:22.985628 2342 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 8 10:12:22.985880 kubelet[2342]: E0708 10:12:22.984627 2342 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18503f045a9a1b1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-08 10:12:22.979590943 +0000 UTC m=+0.532549061,LastTimestamp:2025-07-08 10:12:22.979590943 +0000 UTC m=+0.532549061,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 8 10:12:22.986082 kubelet[2342]: I0708 10:12:22.986044 2342 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 8 10:12:22.986489 kubelet[2342]: E0708 10:12:22.986465 2342 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:12:22.986523 kubelet[2342]: I0708 10:12:22.986501 2342 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 8 10:12:22.986685 kubelet[2342]: I0708 10:12:22.986661 2342 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 8 10:12:22.986737 kubelet[2342]: E0708 10:12:22.986705 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Jul 8 10:12:22.986737 kubelet[2342]: I0708 10:12:22.986736 2342 reconciler.go:26] "Reconciler: start to sync state" Jul 8 10:12:22.987016 kubelet[2342]: E0708 10:12:22.986990 2342 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 8 10:12:22.987659 kubelet[2342]: I0708 10:12:22.987586 2342 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 8 10:12:22.988015 kubelet[2342]: E0708 10:12:22.987998 2342 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 8 10:12:22.988605 kubelet[2342]: I0708 10:12:22.988589 2342 factory.go:223] Registration of the containerd container factory successfully Jul 8 10:12:22.988697 kubelet[2342]: I0708 10:12:22.988674 2342 factory.go:223] Registration of the systemd container factory successfully Jul 8 10:12:23.003103 kubelet[2342]: I0708 10:12:23.002947 2342 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 8 10:12:23.004385 kubelet[2342]: I0708 10:12:23.004352 2342 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 8 10:12:23.004437 kubelet[2342]: I0708 10:12:23.004392 2342 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 8 10:12:23.004437 kubelet[2342]: I0708 10:12:23.004419 2342 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 8 10:12:23.004437 kubelet[2342]: I0708 10:12:23.004430 2342 kubelet.go:2436] "Starting kubelet main sync loop" Jul 8 10:12:23.004502 kubelet[2342]: E0708 10:12:23.004469 2342 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 8 10:12:23.007038 kubelet[2342]: E0708 10:12:23.007009 2342 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 8 10:12:23.007476 kubelet[2342]: I0708 10:12:23.007454 2342 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 8 10:12:23.007476 kubelet[2342]: I0708 10:12:23.007468 2342 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 8 10:12:23.007544 kubelet[2342]: I0708 10:12:23.007484 2342 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:12:23.087124 kubelet[2342]: E0708 10:12:23.087034 2342 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:12:23.105197 kubelet[2342]: E0708 10:12:23.105154 2342 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 8 10:12:23.187502 kubelet[2342]: E0708 10:12:23.187391 2342 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:12:23.187995 kubelet[2342]: E0708 10:12:23.187889 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection 
refused" interval="400ms" Jul 8 10:12:23.288164 kubelet[2342]: E0708 10:12:23.288126 2342 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:12:23.305331 kubelet[2342]: E0708 10:12:23.305231 2342 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 8 10:12:23.308173 kubelet[2342]: I0708 10:12:23.308128 2342 policy_none.go:49] "None policy: Start" Jul 8 10:12:23.308215 kubelet[2342]: I0708 10:12:23.308177 2342 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 8 10:12:23.308215 kubelet[2342]: I0708 10:12:23.308196 2342 state_mem.go:35] "Initializing new in-memory state store" Jul 8 10:12:23.315869 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 8 10:12:23.330091 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 8 10:12:23.333245 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 8 10:12:23.346305 kubelet[2342]: E0708 10:12:23.346273 2342 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 8 10:12:23.346559 kubelet[2342]: I0708 10:12:23.346534 2342 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 8 10:12:23.346606 kubelet[2342]: I0708 10:12:23.346558 2342 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 8 10:12:23.346912 kubelet[2342]: I0708 10:12:23.346797 2342 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 8 10:12:23.347784 kubelet[2342]: E0708 10:12:23.347751 2342 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 8 10:12:23.347784 kubelet[2342]: E0708 10:12:23.347799 2342 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 8 10:12:23.449234 kubelet[2342]: I0708 10:12:23.449145 2342 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 10:12:23.449662 kubelet[2342]: E0708 10:12:23.449627 2342 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 8 10:12:23.589264 kubelet[2342]: E0708 10:12:23.589202 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Jul 8 10:12:23.651503 kubelet[2342]: I0708 10:12:23.651481 2342 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 10:12:23.651818 kubelet[2342]: E0708 10:12:23.651775 2342 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 8 10:12:23.715962 systemd[1]: Created slice kubepods-burstable-pod364638d190d75d045db744cb0d2a09a6.slice - libcontainer container kubepods-burstable-pod364638d190d75d045db744cb0d2a09a6.slice. 
Jul 8 10:12:23.726869 kubelet[2342]: E0708 10:12:23.726836 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:23.730305 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 8 10:12:23.731972 kubelet[2342]: E0708 10:12:23.731949 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:23.733750 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 8 10:12:23.735329 kubelet[2342]: E0708 10:12:23.735299 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:23.791741 kubelet[2342]: I0708 10:12:23.791698 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:23.791797 kubelet[2342]: I0708 10:12:23.791740 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:23.791797 kubelet[2342]: I0708 10:12:23.791761 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:23.791797 kubelet[2342]: I0708 10:12:23.791786 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:23.791899 kubelet[2342]: I0708 10:12:23.791833 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:23.791899 kubelet[2342]: I0708 10:12:23.791851 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:23.791899 kubelet[2342]: I0708 10:12:23.791866 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:23.791899 kubelet[2342]: I0708 10:12:23.791882 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:23.792015 kubelet[2342]: I0708 10:12:23.791920 2342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:23.818276 kubelet[2342]: E0708 10:12:23.818221 2342 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 8 10:12:23.832003 kubelet[2342]: E0708 10:12:23.831967 2342 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 8 10:12:24.028286 containerd[1553]: time="2025-07-08T10:12:24.028163706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:364638d190d75d045db744cb0d2a09a6,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:24.033631 containerd[1553]: time="2025-07-08T10:12:24.033603125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:24.036153 containerd[1553]: time="2025-07-08T10:12:24.036124753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:24.053630 kubelet[2342]: I0708 10:12:24.053401 2342 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 10:12:24.053923 kubelet[2342]: E0708 10:12:24.053888 2342 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 8 10:12:24.065854 containerd[1553]: time="2025-07-08T10:12:24.065791709Z" level=info msg="connecting to shim dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d" address="unix:///run/containerd/s/2093c2bedfb9e03563a82f4261e21ea63949b160dd9b98334f8f212bbd73559b" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:24.072106 containerd[1553]: time="2025-07-08T10:12:24.071843456Z" level=info msg="connecting to shim 8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061" 
address="unix:///run/containerd/s/ff40ff388e8541a6cec7d973ec8713f17afeec7d4bffe973994f0b0f07ab4af2" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:24.082151 containerd[1553]: time="2025-07-08T10:12:24.082108893Z" level=info msg="connecting to shim 6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd" address="unix:///run/containerd/s/89e7a737eeae5ca175288e0e778839f2bed178809b4b56bf7801e6e8c79981fc" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:24.131234 systemd[1]: Started cri-containerd-8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061.scope - libcontainer container 8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061. Jul 8 10:12:24.135451 systemd[1]: Started cri-containerd-dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d.scope - libcontainer container dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d. Jul 8 10:12:24.138738 systemd[1]: Started cri-containerd-6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd.scope - libcontainer container 6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd. Jul 8 10:12:24.188874 containerd[1553]: time="2025-07-08T10:12:24.188826599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061\"" Jul 8 10:12:24.191781 containerd[1553]: time="2025-07-08T10:12:24.191653870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:364638d190d75d045db744cb0d2a09a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d\"" Jul 8 10:12:24.198004 containerd[1553]: time="2025-07-08T10:12:24.197965224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd\"" Jul 8 10:12:24.198283 containerd[1553]: time="2025-07-08T10:12:24.198260177Z" level=info msg="CreateContainer within sandbox \"8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 8 10:12:24.199704 containerd[1553]: time="2025-07-08T10:12:24.199671183Z" level=info msg="CreateContainer within sandbox \"dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 8 10:12:24.209354 containerd[1553]: time="2025-07-08T10:12:24.209325194Z" level=info msg="Container 8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:24.219447 containerd[1553]: time="2025-07-08T10:12:24.219404803Z" level=info msg="CreateContainer within sandbox \"6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 8 10:12:24.226027 containerd[1553]: time="2025-07-08T10:12:24.225993918Z" level=info msg="Container acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:24.231881 containerd[1553]: time="2025-07-08T10:12:24.231858484Z" level=info msg="CreateContainer within sandbox \"dee6f1f8f69ef5ab1106972324097e991f4bc1528a930d6c68428c270589a24d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb\"" Jul 8 10:12:24.232379 containerd[1553]: time="2025-07-08T10:12:24.232344776Z" level=info msg="StartContainer for \"8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb\"" Jul 8 10:12:24.233512 containerd[1553]: time="2025-07-08T10:12:24.233488811Z" level=info msg="connecting to shim 8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb" address="unix:///run/containerd/s/2093c2bedfb9e03563a82f4261e21ea63949b160dd9b98334f8f212bbd73559b" protocol=ttrpc version=3 Jul 8 10:12:24.236793 containerd[1553]: time="2025-07-08T10:12:24.236756167Z" level=info msg="CreateContainer within sandbox \"8b996f211b372c72862d4a78c8497bbab7eddce880ac47a573970c7129429061\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1\"" Jul 8 10:12:24.237167 containerd[1553]: time="2025-07-08T10:12:24.237144655Z" level=info msg="StartContainer for \"acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1\"" Jul 8 10:12:24.238258 containerd[1553]: time="2025-07-08T10:12:24.238231674Z" level=info msg="connecting to shim acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1" address="unix:///run/containerd/s/ff40ff388e8541a6cec7d973ec8713f17afeec7d4bffe973994f0b0f07ab4af2" protocol=ttrpc version=3 Jul 8 10:12:24.240082 containerd[1553]: time="2025-07-08T10:12:24.239862461Z" level=info msg="Container 3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:24.248102 containerd[1553]: time="2025-07-08T10:12:24.247835992Z" level=info msg="CreateContainer within sandbox \"6035c47d0cd177ade94182e2f0f167021f752112dfbb4eb689b4b2ff2f94e9cd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555\"" Jul 8 10:12:24.248701 containerd[1553]: time="2025-07-08T10:12:24.248516077Z" level=info msg="StartContainer for \"3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555\"" Jul 8 10:12:24.250598 containerd[1553]: time="2025-07-08T10:12:24.250574336Z" level=info msg="connecting to shim 3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555" address="unix:///run/containerd/s/89e7a737eeae5ca175288e0e778839f2bed178809b4b56bf7801e6e8c79981fc" protocol=ttrpc version=3 Jul 8 10:12:24.256934 systemd[1]: Started cri-containerd-8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb.scope - libcontainer container 8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb. Jul 8 10:12:24.259999 systemd[1]: Started cri-containerd-acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1.scope - libcontainer container acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1. Jul 8 10:12:24.280211 systemd[1]: Started cri-containerd-3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555.scope - libcontainer container 3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555. 
Jul 8 10:12:24.331580 containerd[1553]: time="2025-07-08T10:12:24.331538905Z" level=info msg="StartContainer for \"8ed6355a66dd16b3a9c9dc5de38909f412f5fde8d6a448d29f7f30b8c714fecb\" returns successfully" Jul 8 10:12:24.389791 kubelet[2342]: E0708 10:12:24.389739 2342 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" Jul 8 10:12:24.449907 containerd[1553]: time="2025-07-08T10:12:24.449858804Z" level=info msg="StartContainer for \"3d03de519438702ab36ddbb8c7cdffe943e709458dbb2bda40ce42a71ecf9555\" returns successfully" Jul 8 10:12:24.459852 containerd[1553]: time="2025-07-08T10:12:24.459706319Z" level=info msg="StartContainer for \"acf26b51fd13c682221698028c6aed042cbba87e6380004bbed9be8dc793aaf1\" returns successfully" Jul 8 10:12:24.856182 kubelet[2342]: I0708 10:12:24.856136 2342 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 10:12:25.015688 kubelet[2342]: E0708 10:12:25.015650 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:25.021485 kubelet[2342]: E0708 10:12:25.021458 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:25.027733 kubelet[2342]: E0708 10:12:25.027705 2342 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 8 10:12:25.914848 kubelet[2342]: I0708 10:12:25.914080 2342 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 8 10:12:25.914848 kubelet[2342]: E0708 10:12:25.914126 2342 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 8 10:12:25.972971 kubelet[2342]: I0708 10:12:25.972901 2342 apiserver.go:52] "Watching apiserver" Jul 8 10:12:25.987271 kubelet[2342]: I0708 10:12:25.987216 2342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:25.987271 kubelet[2342]: I0708 10:12:25.987242 2342 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 8 10:12:25.992335 kubelet[2342]: E0708 10:12:25.992272 2342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:25.992335 kubelet[2342]: I0708 10:12:25.992309 2342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:25.993874 kubelet[2342]: E0708 10:12:25.993709 2342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:25.993931 kubelet[2342]: I0708 10:12:25.993876 2342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:25.994896 kubelet[2342]: E0708 10:12:25.994864 2342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:26.025819 kubelet[2342]: I0708 10:12:26.025761 2342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:26.025880 kubelet[2342]: I0708 10:12:26.025821 2342 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:26.026979 kubelet[2342]: E0708 10:12:26.026956 2342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:26.027300 kubelet[2342]: E0708 10:12:26.027279 2342 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:28.386525 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-7.scope)... Jul 8 10:12:28.386541 systemd[1]: Reloading... Jul 8 10:12:28.474129 zram_generator::config[2671]: No configuration found. Jul 8 10:12:28.564026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:12:28.695585 systemd[1]: Reloading finished in 308 ms. Jul 8 10:12:28.730294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:28.756403 systemd[1]: kubelet.service: Deactivated successfully. Jul 8 10:12:28.756700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:28.756753 systemd[1]: kubelet.service: Consumed 992ms CPU time, 131.4M memory peak. Jul 8 10:12:28.758622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:12:29.004134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:12:29.016017 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 8 10:12:29.063168 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:12:29.063582 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 8 10:12:29.063582 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
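
Annotation: two recurring notices in this stretch are benign. The "no PriorityClass with name system-node-critical" failures stop once the freshly started API server finishes creating its built-in priority classes, and the "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS" message only means the optional drop-in variables are empty. A hedged way to check both from the node, assuming the usual kubeadm layout with an admin kubeconfig at /etc/kubernetes/admin.conf:

    # The built-in classes appear shortly after the apiserver begins serving.
    kubectl --kubeconfig /etc/kubernetes/admin.conf get priorityclass
    # Show the effective kubelet unit plus any drop-ins that would set KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS.
    systemctl cat kubelet.service
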
Jul 8 10:12:29.063755 kubelet[2716]: I0708 10:12:29.063611 2716 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 8 10:12:29.070671 kubelet[2716]: I0708 10:12:29.070633 2716 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 8 10:12:29.070671 kubelet[2716]: I0708 10:12:29.070658 2716 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 8 10:12:29.070872 kubelet[2716]: I0708 10:12:29.070849 2716 server.go:956] "Client rotation is on, will bootstrap in background" Jul 8 10:12:29.071915 kubelet[2716]: I0708 10:12:29.071890 2716 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 8 10:12:29.074371 kubelet[2716]: I0708 10:12:29.074347 2716 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 8 10:12:29.082909 kubelet[2716]: I0708 10:12:29.082877 2716 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 8 10:12:29.087790 kubelet[2716]: I0708 10:12:29.087768 2716 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 8 10:12:29.088142 kubelet[2716]: I0708 10:12:29.088094 2716 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 8 10:12:29.088291 kubelet[2716]: I0708 10:12:29.088121 2716 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 8 10:12:29.088377 kubelet[2716]: I0708 10:12:29.088302 2716 topology_manager.go:138] "Creating topology manager with none policy" Jul 8 10:12:29.088377 kubelet[2716]: I0708 10:12:29.088313 2716 container_manager_linux.go:303] "Creating device plugin manager" Jul 8 10:12:29.088377 kubelet[2716]: I0708 10:12:29.088361 2716 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:12:29.088598 kubelet[2716]: I0708 10:12:29.088574 2716 
kubelet.go:480] "Attempting to sync node with API server" Jul 8 10:12:29.088598 kubelet[2716]: I0708 10:12:29.088591 2716 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 8 10:12:29.088648 kubelet[2716]: I0708 10:12:29.088612 2716 kubelet.go:386] "Adding apiserver pod source" Jul 8 10:12:29.088648 kubelet[2716]: I0708 10:12:29.088637 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 8 10:12:29.089554 kubelet[2716]: I0708 10:12:29.089371 2716 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 8 10:12:29.089953 kubelet[2716]: I0708 10:12:29.089917 2716 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 8 10:12:29.095110 kubelet[2716]: I0708 10:12:29.094549 2716 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 8 10:12:29.095110 kubelet[2716]: I0708 10:12:29.094598 2716 server.go:1289] "Started kubelet" Jul 8 10:12:29.096426 kubelet[2716]: I0708 10:12:29.096364 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 8 10:12:29.096635 kubelet[2716]: I0708 10:12:29.096622 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 8 10:12:29.097777 kubelet[2716]: I0708 10:12:29.097745 2716 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 8 10:12:29.097856 kubelet[2716]: I0708 10:12:29.097809 2716 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 8 10:12:29.099724 kubelet[2716]: I0708 10:12:29.099698 2716 server.go:317] "Adding debug handlers to kubelet server" Jul 8 10:12:29.103511 kubelet[2716]: I0708 10:12:29.103473 2716 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 8 10:12:29.103980 kubelet[2716]: I0708 10:12:29.103967 2716 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 8 10:12:29.104040 kubelet[2716]: I0708 10:12:29.103529 2716 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 8 10:12:29.104599 kubelet[2716]: I0708 10:12:29.104548 2716 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 8 10:12:29.104881 kubelet[2716]: I0708 10:12:29.104869 2716 reconciler.go:26] "Reconciler: start to sync state" Jul 8 10:12:29.108122 kubelet[2716]: E0708 10:12:29.108098 2716 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 8 10:12:29.110458 kubelet[2716]: I0708 10:12:29.110418 2716 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 8 10:12:29.112557 kubelet[2716]: I0708 10:12:29.112502 2716 factory.go:223] Registration of the containerd container factory successfully Jul 8 10:12:29.112557 kubelet[2716]: I0708 10:12:29.112538 2716 factory.go:223] Registration of the systemd container factory successfully Jul 8 10:12:29.116375 kubelet[2716]: I0708 10:12:29.116351 2716 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 8 10:12:29.116482 kubelet[2716]: I0708 10:12:29.116471 2716 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 8 10:12:29.116552 kubelet[2716]: I0708 10:12:29.116541 2716 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 8 10:12:29.116599 kubelet[2716]: I0708 10:12:29.116591 2716 kubelet.go:2436] "Starting kubelet main sync loop" Jul 8 10:12:29.116697 kubelet[2716]: E0708 10:12:29.116678 2716 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 8 10:12:29.143814 kubelet[2716]: I0708 10:12:29.143783 2716 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 8 10:12:29.143814 kubelet[2716]: I0708 10:12:29.143803 2716 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 8 10:12:29.143814 kubelet[2716]: I0708 10:12:29.143827 2716 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144009 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144042 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144225 2716 policy_none.go:49] "None policy: Start" Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144242 2716 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144256 2716 state_mem.go:35] "Initializing new in-memory state store" Jul 8 10:12:29.145093 kubelet[2716]: I0708 10:12:29.144375 2716 state_mem.go:75] "Updated machine memory state" Jul 8 10:12:29.149473 kubelet[2716]: E0708 10:12:29.149451 2716 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 8 10:12:29.149643 kubelet[2716]: I0708 10:12:29.149625 2716 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 8 10:12:29.149690 kubelet[2716]: I0708 10:12:29.149641 2716 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 8 10:12:29.150040 kubelet[2716]: I0708 10:12:29.149826 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 8 10:12:29.150919 kubelet[2716]: E0708 10:12:29.150904 2716 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 8 10:12:29.217589 kubelet[2716]: I0708 10:12:29.217545 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:29.217746 kubelet[2716]: I0708 10:12:29.217637 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:29.217746 kubelet[2716]: I0708 10:12:29.217641 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:29.253268 kubelet[2716]: I0708 10:12:29.253236 2716 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 8 10:12:29.258248 kubelet[2716]: I0708 10:12:29.258168 2716 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 8 10:12:29.258248 kubelet[2716]: I0708 10:12:29.258232 2716 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 8 10:12:29.306084 kubelet[2716]: I0708 10:12:29.306001 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:29.306084 kubelet[2716]: I0708 10:12:29.306051 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:29.306084 kubelet[2716]: I0708 10:12:29.306086 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:29.306277 kubelet[2716]: I0708 10:12:29.306136 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/364638d190d75d045db744cb0d2a09a6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"364638d190d75d045db744cb0d2a09a6\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:29.306277 kubelet[2716]: I0708 10:12:29.306181 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:29.306277 kubelet[2716]: I0708 10:12:29.306203 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:29.306277 kubelet[2716]: I0708 10:12:29.306222 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:29.306277 kubelet[2716]: I0708 10:12:29.306236 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:29.306389 kubelet[2716]: I0708 10:12:29.306254 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:12:30.088941 kubelet[2716]: I0708 10:12:30.088886 2716 apiserver.go:52] "Watching apiserver" Jul 8 10:12:30.104903 kubelet[2716]: I0708 10:12:30.104850 2716 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 8 10:12:30.129264 kubelet[2716]: I0708 10:12:30.129227 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:30.129790 kubelet[2716]: I0708 10:12:30.129771 2716 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:30.136882 kubelet[2716]: E0708 10:12:30.136796 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 8 10:12:30.137626 kubelet[2716]: E0708 10:12:30.137526 2716 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 8 10:12:30.146951 kubelet[2716]: I0708 10:12:30.146878 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.146855883 podStartE2EDuration="1.146855883s" podCreationTimestamp="2025-07-08 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:12:30.146826567 +0000 UTC m=+1.123763855" watchObservedRunningTime="2025-07-08 10:12:30.146855883 +0000 UTC m=+1.123793171" Jul 8 10:12:30.161642 kubelet[2716]: I0708 10:12:30.161578 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1615564680000001 podStartE2EDuration="1.161556468s" podCreationTimestamp="2025-07-08 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:12:30.153923278 +0000 UTC m=+1.130860566" watchObservedRunningTime="2025-07-08 10:12:30.161556468 +0000 UTC m=+1.138493756" Jul 8 10:12:30.172538 kubelet[2716]: I0708 10:12:30.172461 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.172439686 podStartE2EDuration="1.172439686s" podCreationTimestamp="2025-07-08 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:12:30.161853697 +0000 UTC m=+1.138790985" watchObservedRunningTime="2025-07-08 10:12:30.172439686 +0000 UTC m=+1.149376974" Jul 8 10:12:32.959996 kubelet[2716]: I0708 10:12:32.959898 2716 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 8 10:12:32.961443 containerd[1553]: time="2025-07-08T10:12:32.961234787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 8 10:12:32.962109 kubelet[2716]: I0708 10:12:32.962056 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 8 10:12:32.997347 systemd[1]: Created slice kubepods-besteffort-pod4f9c7eec_23b1_496b_b92a_b5de81ba81a2.slice - libcontainer container kubepods-besteffort-pod4f9c7eec_23b1_496b_b92a_b5de81ba81a2.slice. Jul 8 10:12:33.031375 kubelet[2716]: I0708 10:12:33.031322 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f9c7eec-23b1-496b-b92a-b5de81ba81a2-kube-proxy\") pod \"kube-proxy-m4t2w\" (UID: \"4f9c7eec-23b1-496b-b92a-b5de81ba81a2\") " pod="kube-system/kube-proxy-m4t2w" Jul 8 10:12:33.031375 kubelet[2716]: I0708 10:12:33.031365 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f9c7eec-23b1-496b-b92a-b5de81ba81a2-xtables-lock\") pod \"kube-proxy-m4t2w\" (UID: \"4f9c7eec-23b1-496b-b92a-b5de81ba81a2\") " pod="kube-system/kube-proxy-m4t2w" Jul 8 10:12:33.031375 kubelet[2716]: I0708 10:12:33.031381 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f9c7eec-23b1-496b-b92a-b5de81ba81a2-lib-modules\") pod \"kube-proxy-m4t2w\" (UID: \"4f9c7eec-23b1-496b-b92a-b5de81ba81a2\") " pod="kube-system/kube-proxy-m4t2w" Jul 8 10:12:33.031608 kubelet[2716]: I0708 10:12:33.031405 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgpt\" (UniqueName: \"kubernetes.io/projected/4f9c7eec-23b1-496b-b92a-b5de81ba81a2-kube-api-access-npgpt\") pod \"kube-proxy-m4t2w\" (UID: \"4f9c7eec-23b1-496b-b92a-b5de81ba81a2\") " pod="kube-system/kube-proxy-m4t2w" Jul 8 10:12:33.310408 containerd[1553]: time="2025-07-08T10:12:33.310260640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4t2w,Uid:4f9c7eec-23b1-496b-b92a-b5de81ba81a2,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:33.330875 containerd[1553]: time="2025-07-08T10:12:33.330808471Z" level=info msg="connecting to shim 25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903" address="unix:///run/containerd/s/6edfd62d18d0712e761bc33d83c5b317641dbf16e2d855f9623213db18003171" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:33.360317 systemd[1]: Started cri-containerd-25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903.scope - libcontainer container 25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903. 
Jul 8 10:12:33.388687 containerd[1553]: time="2025-07-08T10:12:33.388643061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4t2w,Uid:4f9c7eec-23b1-496b-b92a-b5de81ba81a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903\"" Jul 8 10:12:33.394011 containerd[1553]: time="2025-07-08T10:12:33.393971855Z" level=info msg="CreateContainer within sandbox \"25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 8 10:12:33.406982 containerd[1553]: time="2025-07-08T10:12:33.406404602Z" level=info msg="Container 33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:33.409359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341395382.mount: Deactivated successfully. Jul 8 10:12:33.418292 containerd[1553]: time="2025-07-08T10:12:33.418225451Z" level=info msg="CreateContainer within sandbox \"25572d956821af8f61b301c8ddd3b13dde6565133e081badd7e4ec6781653903\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c\"" Jul 8 10:12:33.420063 containerd[1553]: time="2025-07-08T10:12:33.418954012Z" level=info msg="StartContainer for \"33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c\"" Jul 8 10:12:33.421254 containerd[1553]: time="2025-07-08T10:12:33.421004337Z" level=info msg="connecting to shim 33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c" address="unix:///run/containerd/s/6edfd62d18d0712e761bc33d83c5b317641dbf16e2d855f9623213db18003171" protocol=ttrpc version=3 Jul 8 10:12:33.447250 systemd[1]: Started cri-containerd-33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c.scope - libcontainer container 33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c. Jul 8 10:12:33.494273 containerd[1553]: time="2025-07-08T10:12:33.494231645Z" level=info msg="StartContainer for \"33c43316c0284ee8faf57de3f9deb30544052f7970bbebdfefd0853eede9085c\" returns successfully" Jul 8 10:12:33.798843 systemd[1]: Created slice kubepods-besteffort-pod9baa914e_0ab0_42ce_8f42_ddc81d1f1885.slice - libcontainer container kubepods-besteffort-pod9baa914e_0ab0_42ce_8f42_ddc81d1f1885.slice. 
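The CreateContainer and StartContainer messages above are CRI calls serviced by containerd's CRI plugin inside the sandbox created just before them. For readers less familiar with that lifecycle, here is a minimal sketch of the same create → task → start sequence using containerd's native Go client (essentially its getting-started example, written against the 1.x client import paths). The kubelet never uses this client directly; it speaks CRI over the socket shown in the log, and the image reference is borrowed from a later pull in this log purely to make the sketch concrete.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to and work
	// in the "k8s.io" namespace used by the CRI plugin.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull (or fetch from the local store) an image to run.
	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer equivalent: container metadata plus a writable snapshot.
	container, err := client.NewContainer(ctx, "example",
		containerd.WithNewSnapshot("example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer equivalent: create a task (the running process) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx, containerd.WithProcessKill)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started; the CRI plugin performs the same sequence for kube-proxy above")
}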
Jul 8 10:12:33.835647 kubelet[2716]: I0708 10:12:33.835610 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9baa914e-0ab0-42ce-8f42-ddc81d1f1885-var-lib-calico\") pod \"tigera-operator-747864d56d-lm24t\" (UID: \"9baa914e-0ab0-42ce-8f42-ddc81d1f1885\") " pod="tigera-operator/tigera-operator-747864d56d-lm24t" Jul 8 10:12:33.835784 kubelet[2716]: I0708 10:12:33.835653 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p58v9\" (UniqueName: \"kubernetes.io/projected/9baa914e-0ab0-42ce-8f42-ddc81d1f1885-kube-api-access-p58v9\") pod \"tigera-operator-747864d56d-lm24t\" (UID: \"9baa914e-0ab0-42ce-8f42-ddc81d1f1885\") " pod="tigera-operator/tigera-operator-747864d56d-lm24t" Jul 8 10:12:34.102534 containerd[1553]: time="2025-07-08T10:12:34.102490258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lm24t,Uid:9baa914e-0ab0-42ce-8f42-ddc81d1f1885,Namespace:tigera-operator,Attempt:0,}" Jul 8 10:12:34.122174 containerd[1553]: time="2025-07-08T10:12:34.122132598Z" level=info msg="connecting to shim 78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013" address="unix:///run/containerd/s/e4640eec8c3e0977d7d3ba82a921ef228a8f6d052375b89128f4abcf64673c1a" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:34.157326 systemd[1]: Started cri-containerd-78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013.scope - libcontainer container 78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013. Jul 8 10:12:34.201466 containerd[1553]: time="2025-07-08T10:12:34.201422051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lm24t,Uid:9baa914e-0ab0-42ce-8f42-ddc81d1f1885,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013\"" Jul 8 10:12:34.203274 containerd[1553]: time="2025-07-08T10:12:34.203234137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 8 10:12:34.458032 kubelet[2716]: I0708 10:12:34.457674 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4t2w" podStartSLOduration=2.4576492229999998 podStartE2EDuration="2.457649223s" podCreationTimestamp="2025-07-08 10:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:12:34.153914219 +0000 UTC m=+5.130851507" watchObservedRunningTime="2025-07-08 10:12:34.457649223 +0000 UTC m=+5.434586511" Jul 8 10:12:35.477491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126499937.mount: Deactivated successfully. 
Jul 8 10:12:37.628985 containerd[1553]: time="2025-07-08T10:12:37.628907134Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:37.629663 containerd[1553]: time="2025-07-08T10:12:37.629615451Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 8 10:12:37.630890 containerd[1553]: time="2025-07-08T10:12:37.630835961Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:37.632865 containerd[1553]: time="2025-07-08T10:12:37.632831145Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:37.633415 containerd[1553]: time="2025-07-08T10:12:37.633367264Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.430093993s" Jul 8 10:12:37.633415 containerd[1553]: time="2025-07-08T10:12:37.633413182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 8 10:12:37.638141 containerd[1553]: time="2025-07-08T10:12:37.638089394Z" level=info msg="CreateContainer within sandbox \"78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 8 10:12:37.646675 containerd[1553]: time="2025-07-08T10:12:37.646636786Z" level=info msg="Container 0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:37.651812 containerd[1553]: time="2025-07-08T10:12:37.651770107Z" level=info msg="CreateContainer within sandbox \"78c6ff2f65c01cb552826f5a5a535cd17d7ff048602c35f3110e7e3489a4f013\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f\"" Jul 8 10:12:37.652291 containerd[1553]: time="2025-07-08T10:12:37.652218209Z" level=info msg="StartContainer for \"0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f\"" Jul 8 10:12:37.653395 containerd[1553]: time="2025-07-08T10:12:37.653353538Z" level=info msg="connecting to shim 0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f" address="unix:///run/containerd/s/e4640eec8c3e0977d7d3ba82a921ef228a8f6d052375b89128f4abcf64673c1a" protocol=ttrpc version=3 Jul 8 10:12:37.704298 systemd[1]: Started cri-containerd-0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f.scope - libcontainer container 0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f. 
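The pull that completes above reports 25056543 bytes read over 3.430093993s, alongside a recorded image size of 25052538 bytes for the resolved digest (the two figures differ slightly, presumably because one is the raw transfer counter and the other is the size containerd records for the image content). A small back-of-the-envelope sketch of the effective transfer rate those logged numbers imply:

package main

import "fmt"

func main() {
	const bytesRead = 25056543  // "bytes read" from the pull progress line above
	const seconds = 3.430093993 // "in 3.430093993s" from the Pulled line above
	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
	// Prints roughly: 23.9 MiB in 3.43s ≈ 7.0 MiB/s
}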
Jul 8 10:12:37.735018 containerd[1553]: time="2025-07-08T10:12:37.734977562Z" level=info msg="StartContainer for \"0a3cf4b8c37fc6dc7c86ec479b3e1bf1694b500aa875aa7627e36180cfe2843f\" returns successfully" Jul 8 10:12:40.412377 kubelet[2716]: I0708 10:12:40.412303 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lm24t" podStartSLOduration=3.980771881 podStartE2EDuration="7.412284134s" podCreationTimestamp="2025-07-08 10:12:33 +0000 UTC" firstStartedPulling="2025-07-08 10:12:34.202691613 +0000 UTC m=+5.179628901" lastFinishedPulling="2025-07-08 10:12:37.634203866 +0000 UTC m=+8.611141154" observedRunningTime="2025-07-08 10:12:38.155219775 +0000 UTC m=+9.132157063" watchObservedRunningTime="2025-07-08 10:12:40.412284134 +0000 UTC m=+11.389221422" Jul 8 10:12:40.599104 update_engine[1547]: I20250708 10:12:40.599006 1547 update_attempter.cc:509] Updating boot flags... Jul 8 10:12:43.390370 sudo[1773]: pam_unix(sudo:session): session closed for user root Jul 8 10:12:43.392471 sshd[1772]: Connection closed by 10.0.0.1 port 34432 Jul 8 10:12:43.393746 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jul 8 10:12:43.401213 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:34432.service: Deactivated successfully. Jul 8 10:12:43.408176 systemd[1]: session-7.scope: Deactivated successfully. Jul 8 10:12:43.409241 systemd[1]: session-7.scope: Consumed 6.376s CPU time, 233.5M memory peak. Jul 8 10:12:43.410924 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. Jul 8 10:12:43.414112 systemd-logind[1535]: Removed session 7. Jul 8 10:12:47.675039 systemd[1]: Created slice kubepods-besteffort-pod067ea924_d99e_49a2_b5ee_37fde9d5bc49.slice - libcontainer container kubepods-besteffort-pod067ea924_d99e_49a2_b5ee_37fde9d5bc49.slice. Jul 8 10:12:47.722212 kubelet[2716]: I0708 10:12:47.722156 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/067ea924-d99e-49a2-b5ee-37fde9d5bc49-tigera-ca-bundle\") pod \"calico-typha-84b4b8fd4f-mx5ds\" (UID: \"067ea924-d99e-49a2-b5ee-37fde9d5bc49\") " pod="calico-system/calico-typha-84b4b8fd4f-mx5ds" Jul 8 10:12:47.722212 kubelet[2716]: I0708 10:12:47.722199 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xtt\" (UniqueName: \"kubernetes.io/projected/067ea924-d99e-49a2-b5ee-37fde9d5bc49-kube-api-access-c4xtt\") pod \"calico-typha-84b4b8fd4f-mx5ds\" (UID: \"067ea924-d99e-49a2-b5ee-37fde9d5bc49\") " pod="calico-system/calico-typha-84b4b8fd4f-mx5ds" Jul 8 10:12:47.722212 kubelet[2716]: I0708 10:12:47.722215 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/067ea924-d99e-49a2-b5ee-37fde9d5bc49-typha-certs\") pod \"calico-typha-84b4b8fd4f-mx5ds\" (UID: \"067ea924-d99e-49a2-b5ee-37fde9d5bc49\") " pod="calico-system/calico-typha-84b4b8fd4f-mx5ds" Jul 8 10:12:47.815843 systemd[1]: Created slice kubepods-besteffort-podd33355c0_571d_4fd9_8838_1521c9051b32.slice - libcontainer container kubepods-besteffort-podd33355c0_571d_4fd9_8838_1521c9051b32.slice. 
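The tigera-operator "Observed pod startup duration" entry logged at 10:12:40 above is the first in this log where the pull timestamps are populated, which is why its two durations differ: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (for the earlier control-plane and kube-proxy pods the pull window is zero, so the two coincide). A minimal sketch of that arithmetic, assuming the timestamps are in Go's default time.Time format as the kubelet prints them:

package main

import (
	"fmt"
	"time"
)

// Default formatting produced by time.Time.String(), which is what the
// kubelet logs for these fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-08 10:12:33 +0000 UTC")
	running := mustParse("2025-07-08 10:12:40.412284134 +0000 UTC")
	pullStart := mustParse("2025-07-08 10:12:34.202691613 +0000 UTC")
	pullEnd := mustParse("2025-07-08 10:12:37.634203866 +0000 UTC")

	e2e := running.Sub(created)         // 7.412284134s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 3.980771881s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}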
Jul 8 10:12:47.822726 kubelet[2716]: I0708 10:12:47.822688 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-cni-net-dir\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822726 kubelet[2716]: I0708 10:12:47.822726 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-lib-modules\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822875 kubelet[2716]: I0708 10:12:47.822745 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d33355c0-571d-4fd9-8838-1521c9051b32-node-certs\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822875 kubelet[2716]: I0708 10:12:47.822762 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d33355c0-571d-4fd9-8838-1521c9051b32-tigera-ca-bundle\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822875 kubelet[2716]: I0708 10:12:47.822776 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-cni-log-dir\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822875 kubelet[2716]: I0708 10:12:47.822790 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-flexvol-driver-host\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822875 kubelet[2716]: I0708 10:12:47.822848 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-policysync\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822998 kubelet[2716]: I0708 10:12:47.822893 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-xtables-lock\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822998 kubelet[2716]: I0708 10:12:47.822918 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkbx\" (UniqueName: \"kubernetes.io/projected/d33355c0-571d-4fd9-8838-1521c9051b32-kube-api-access-kdkbx\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822998 kubelet[2716]: I0708 10:12:47.822946 2716 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-cni-bin-dir\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822998 kubelet[2716]: I0708 10:12:47.822959 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-var-lib-calico\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.822998 kubelet[2716]: I0708 10:12:47.822975 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d33355c0-571d-4fd9-8838-1521c9051b32-var-run-calico\") pod \"calico-node-cjhm5\" (UID: \"d33355c0-571d-4fd9-8838-1521c9051b32\") " pod="calico-system/calico-node-cjhm5" Jul 8 10:12:47.931401 kubelet[2716]: E0708 10:12:47.931137 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:47.931401 kubelet[2716]: W0708 10:12:47.931162 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:47.931401 kubelet[2716]: E0708 10:12:47.931191 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:47.932751 kubelet[2716]: E0708 10:12:47.932725 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:47.932751 kubelet[2716]: W0708 10:12:47.932748 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:47.932827 kubelet[2716]: E0708 10:12:47.932769 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:47.980245 containerd[1553]: time="2025-07-08T10:12:47.980190033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b4b8fd4f-mx5ds,Uid:067ea924-d99e-49a2-b5ee-37fde9d5bc49,Namespace:calico-system,Attempt:0,}" Jul 8 10:12:48.026992 containerd[1553]: time="2025-07-08T10:12:48.026918306Z" level=info msg="connecting to shim 94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323" address="unix:///run/containerd/s/e5e0173d79e319b96857edd9d2dd627e2de95fd5ff23137629117b32e32403e1" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:48.063365 systemd[1]: Started cri-containerd-94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323.scope - libcontainer container 94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323. 
Jul 8 10:12:48.068904 kubelet[2716]: E0708 10:12:48.068724 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:48.112443 kubelet[2716]: E0708 10:12:48.112401 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.112443 kubelet[2716]: W0708 10:12:48.112431 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.112593 kubelet[2716]: E0708 10:12:48.112458 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.112697 kubelet[2716]: E0708 10:12:48.112648 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.112697 kubelet[2716]: W0708 10:12:48.112661 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.112756 kubelet[2716]: E0708 10:12:48.112723 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.113001 kubelet[2716]: E0708 10:12:48.112984 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.113001 kubelet[2716]: W0708 10:12:48.112996 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.113077 kubelet[2716]: E0708 10:12:48.113006 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.113338 kubelet[2716]: E0708 10:12:48.113320 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.113338 kubelet[2716]: W0708 10:12:48.113331 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.113338 kubelet[2716]: E0708 10:12:48.113340 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.113589 kubelet[2716]: E0708 10:12:48.113564 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.113589 kubelet[2716]: W0708 10:12:48.113578 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.113642 kubelet[2716]: E0708 10:12:48.113619 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.113949 kubelet[2716]: E0708 10:12:48.113928 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.113949 kubelet[2716]: W0708 10:12:48.113942 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.114021 kubelet[2716]: E0708 10:12:48.113952 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.114212 kubelet[2716]: E0708 10:12:48.114195 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.114212 kubelet[2716]: W0708 10:12:48.114206 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.114274 kubelet[2716]: E0708 10:12:48.114214 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.114432 kubelet[2716]: E0708 10:12:48.114415 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.114432 kubelet[2716]: W0708 10:12:48.114426 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.114519 kubelet[2716]: E0708 10:12:48.114436 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.114676 kubelet[2716]: E0708 10:12:48.114659 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.114676 kubelet[2716]: W0708 10:12:48.114671 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.114734 kubelet[2716]: E0708 10:12:48.114680 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.114915 kubelet[2716]: E0708 10:12:48.114897 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.114915 kubelet[2716]: W0708 10:12:48.114907 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.114915 kubelet[2716]: E0708 10:12:48.114916 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.115149 kubelet[2716]: E0708 10:12:48.115130 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.115149 kubelet[2716]: W0708 10:12:48.115142 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.115149 kubelet[2716]: E0708 10:12:48.115150 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.115371 kubelet[2716]: E0708 10:12:48.115344 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.115371 kubelet[2716]: W0708 10:12:48.115365 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.115425 kubelet[2716]: E0708 10:12:48.115374 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.115562 kubelet[2716]: E0708 10:12:48.115546 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.115562 kubelet[2716]: W0708 10:12:48.115557 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.115613 kubelet[2716]: E0708 10:12:48.115565 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.115777 kubelet[2716]: E0708 10:12:48.115750 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.115777 kubelet[2716]: W0708 10:12:48.115773 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.115827 kubelet[2716]: E0708 10:12:48.115782 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.115981 kubelet[2716]: E0708 10:12:48.115961 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.115981 kubelet[2716]: W0708 10:12:48.115982 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.116046 kubelet[2716]: E0708 10:12:48.115991 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.116230 kubelet[2716]: E0708 10:12:48.116213 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.116230 kubelet[2716]: W0708 10:12:48.116225 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.116312 kubelet[2716]: E0708 10:12:48.116233 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.116493 kubelet[2716]: E0708 10:12:48.116476 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.116493 kubelet[2716]: W0708 10:12:48.116487 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.116563 kubelet[2716]: E0708 10:12:48.116495 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.117145 kubelet[2716]: E0708 10:12:48.117125 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.117145 kubelet[2716]: W0708 10:12:48.117136 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.117145 kubelet[2716]: E0708 10:12:48.117144 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.117335 kubelet[2716]: E0708 10:12:48.117319 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.117335 kubelet[2716]: W0708 10:12:48.117332 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.117401 kubelet[2716]: E0708 10:12:48.117342 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.117525 kubelet[2716]: E0708 10:12:48.117510 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.117525 kubelet[2716]: W0708 10:12:48.117520 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.117599 kubelet[2716]: E0708 10:12:48.117528 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.120153 containerd[1553]: time="2025-07-08T10:12:48.120113518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjhm5,Uid:d33355c0-571d-4fd9-8838-1521c9051b32,Namespace:calico-system,Attempt:0,}" Jul 8 10:12:48.120273 containerd[1553]: time="2025-07-08T10:12:48.120179473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b4b8fd4f-mx5ds,Uid:067ea924-d99e-49a2-b5ee-37fde9d5bc49,Namespace:calico-system,Attempt:0,} returns sandbox id \"94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323\"" Jul 8 10:12:48.121774 containerd[1553]: time="2025-07-08T10:12:48.121741532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 8 10:12:48.127992 kubelet[2716]: E0708 10:12:48.127774 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.127992 kubelet[2716]: W0708 10:12:48.127803 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.127992 kubelet[2716]: E0708 10:12:48.127825 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.127992 kubelet[2716]: I0708 10:12:48.127859 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9d03d297-85cc-4cfe-b78f-31085a48166e-socket-dir\") pod \"csi-node-driver-cd6h8\" (UID: \"9d03d297-85cc-4cfe-b78f-31085a48166e\") " pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:12:48.128334 kubelet[2716]: E0708 10:12:48.128158 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.128334 kubelet[2716]: W0708 10:12:48.128172 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.128334 kubelet[2716]: E0708 10:12:48.128185 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.128334 kubelet[2716]: I0708 10:12:48.128208 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmkn\" (UniqueName: \"kubernetes.io/projected/9d03d297-85cc-4cfe-b78f-31085a48166e-kube-api-access-tfmkn\") pod \"csi-node-driver-cd6h8\" (UID: \"9d03d297-85cc-4cfe-b78f-31085a48166e\") " pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:12:48.128450 kubelet[2716]: E0708 10:12:48.128428 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.128450 kubelet[2716]: W0708 10:12:48.128440 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.128450 kubelet[2716]: E0708 10:12:48.128448 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.128583 kubelet[2716]: I0708 10:12:48.128475 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d03d297-85cc-4cfe-b78f-31085a48166e-registration-dir\") pod \"csi-node-driver-cd6h8\" (UID: \"9d03d297-85cc-4cfe-b78f-31085a48166e\") " pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:12:48.128672 kubelet[2716]: E0708 10:12:48.128657 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.128672 kubelet[2716]: W0708 10:12:48.128667 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.128755 kubelet[2716]: E0708 10:12:48.128676 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.128755 kubelet[2716]: I0708 10:12:48.128695 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d03d297-85cc-4cfe-b78f-31085a48166e-kubelet-dir\") pod \"csi-node-driver-cd6h8\" (UID: \"9d03d297-85cc-4cfe-b78f-31085a48166e\") " pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:12:48.128903 kubelet[2716]: E0708 10:12:48.128885 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.128903 kubelet[2716]: W0708 10:12:48.128898 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.128964 kubelet[2716]: E0708 10:12:48.128908 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.129313 kubelet[2716]: E0708 10:12:48.129286 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.129313 kubelet[2716]: W0708 10:12:48.129301 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.129313 kubelet[2716]: E0708 10:12:48.129311 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.129621 kubelet[2716]: E0708 10:12:48.129591 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.129621 kubelet[2716]: W0708 10:12:48.129606 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.129621 kubelet[2716]: E0708 10:12:48.129616 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.129963 kubelet[2716]: E0708 10:12:48.129943 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.129963 kubelet[2716]: W0708 10:12:48.129958 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.130057 kubelet[2716]: E0708 10:12:48.129969 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.130217 kubelet[2716]: E0708 10:12:48.130198 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.130217 kubelet[2716]: W0708 10:12:48.130212 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.130318 kubelet[2716]: E0708 10:12:48.130224 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.130987 kubelet[2716]: E0708 10:12:48.130842 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.130987 kubelet[2716]: W0708 10:12:48.130857 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.130987 kubelet[2716]: E0708 10:12:48.130878 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.131187 kubelet[2716]: I0708 10:12:48.131139 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d03d297-85cc-4cfe-b78f-31085a48166e-varrun\") pod \"csi-node-driver-cd6h8\" (UID: \"9d03d297-85cc-4cfe-b78f-31085a48166e\") " pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:12:48.131376 kubelet[2716]: E0708 10:12:48.131356 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.131376 kubelet[2716]: W0708 10:12:48.131369 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.131376 kubelet[2716]: E0708 10:12:48.131379 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.131555 kubelet[2716]: E0708 10:12:48.131540 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.131555 kubelet[2716]: W0708 10:12:48.131550 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.131630 kubelet[2716]: E0708 10:12:48.131558 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.131754 kubelet[2716]: E0708 10:12:48.131723 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.131754 kubelet[2716]: W0708 10:12:48.131733 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.131754 kubelet[2716]: E0708 10:12:48.131741 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.131917 kubelet[2716]: E0708 10:12:48.131898 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.131917 kubelet[2716]: W0708 10:12:48.131914 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.131966 kubelet[2716]: E0708 10:12:48.131923 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.132221 kubelet[2716]: E0708 10:12:48.132195 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.132221 kubelet[2716]: W0708 10:12:48.132211 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.132221 kubelet[2716]: E0708 10:12:48.132220 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.151103 containerd[1553]: time="2025-07-08T10:12:48.150921333Z" level=info msg="connecting to shim 79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c" address="unix:///run/containerd/s/9ddd0acf9171398716d6627acc4bddd7e3a8f64050a30d348403ffc2caac993a" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:12:48.190316 systemd[1]: Started cri-containerd-79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c.scope - libcontainer container 79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c. Jul 8 10:12:48.220391 containerd[1553]: time="2025-07-08T10:12:48.220344164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjhm5,Uid:d33355c0-571d-4fd9-8838-1521c9051b32,Namespace:calico-system,Attempt:0,} returns sandbox id \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\"" Jul 8 10:12:48.233242 kubelet[2716]: E0708 10:12:48.233207 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.233242 kubelet[2716]: W0708 10:12:48.233236 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.233368 kubelet[2716]: E0708 10:12:48.233262 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.233505 kubelet[2716]: E0708 10:12:48.233488 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.233528 kubelet[2716]: W0708 10:12:48.233505 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.233528 kubelet[2716]: E0708 10:12:48.233516 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.233951 kubelet[2716]: E0708 10:12:48.233912 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.233951 kubelet[2716]: W0708 10:12:48.233942 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.234004 kubelet[2716]: E0708 10:12:48.233967 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.234245 kubelet[2716]: E0708 10:12:48.234217 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.234245 kubelet[2716]: W0708 10:12:48.234231 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.234245 kubelet[2716]: E0708 10:12:48.234242 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.234485 kubelet[2716]: E0708 10:12:48.234466 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.234485 kubelet[2716]: W0708 10:12:48.234479 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.234538 kubelet[2716]: E0708 10:12:48.234489 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.234834 kubelet[2716]: E0708 10:12:48.234802 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.234834 kubelet[2716]: W0708 10:12:48.234821 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.234890 kubelet[2716]: E0708 10:12:48.234833 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.235145 kubelet[2716]: E0708 10:12:48.235116 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.235145 kubelet[2716]: W0708 10:12:48.235132 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.235145 kubelet[2716]: E0708 10:12:48.235145 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.235363 kubelet[2716]: E0708 10:12:48.235345 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.235363 kubelet[2716]: W0708 10:12:48.235358 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.235425 kubelet[2716]: E0708 10:12:48.235368 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.235575 kubelet[2716]: E0708 10:12:48.235558 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.235575 kubelet[2716]: W0708 10:12:48.235570 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.235575 kubelet[2716]: E0708 10:12:48.235580 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.235853 kubelet[2716]: E0708 10:12:48.235823 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.235853 kubelet[2716]: W0708 10:12:48.235835 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.235853 kubelet[2716]: E0708 10:12:48.235845 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.237168 kubelet[2716]: E0708 10:12:48.237146 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.237168 kubelet[2716]: W0708 10:12:48.237162 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.237226 kubelet[2716]: E0708 10:12:48.237175 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.237447 kubelet[2716]: E0708 10:12:48.237429 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.237447 kubelet[2716]: W0708 10:12:48.237442 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.237504 kubelet[2716]: E0708 10:12:48.237453 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.237711 kubelet[2716]: E0708 10:12:48.237677 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.237711 kubelet[2716]: W0708 10:12:48.237690 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.237711 kubelet[2716]: E0708 10:12:48.237698 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.238056 kubelet[2716]: E0708 10:12:48.238028 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.238056 kubelet[2716]: W0708 10:12:48.238043 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.238131 kubelet[2716]: E0708 10:12:48.238076 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.238309 kubelet[2716]: E0708 10:12:48.238279 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.238309 kubelet[2716]: W0708 10:12:48.238292 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.238309 kubelet[2716]: E0708 10:12:48.238302 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.238621 kubelet[2716]: E0708 10:12:48.238581 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.238621 kubelet[2716]: W0708 10:12:48.238595 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.238621 kubelet[2716]: E0708 10:12:48.238606 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.238950 kubelet[2716]: E0708 10:12:48.238910 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.238950 kubelet[2716]: W0708 10:12:48.238945 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.239034 kubelet[2716]: E0708 10:12:48.238967 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.239451 kubelet[2716]: E0708 10:12:48.239433 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.239451 kubelet[2716]: W0708 10:12:48.239447 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.239512 kubelet[2716]: E0708 10:12:48.239461 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.239721 kubelet[2716]: E0708 10:12:48.239705 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.239721 kubelet[2716]: W0708 10:12:48.239718 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.239768 kubelet[2716]: E0708 10:12:48.239756 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.240125 kubelet[2716]: E0708 10:12:48.240106 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.240125 kubelet[2716]: W0708 10:12:48.240122 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.240183 kubelet[2716]: E0708 10:12:48.240134 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.240454 kubelet[2716]: E0708 10:12:48.240437 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.240454 kubelet[2716]: W0708 10:12:48.240451 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.240510 kubelet[2716]: E0708 10:12:48.240463 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.240728 kubelet[2716]: E0708 10:12:48.240709 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.240728 kubelet[2716]: W0708 10:12:48.240723 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.240785 kubelet[2716]: E0708 10:12:48.240734 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:48.241086 kubelet[2716]: E0708 10:12:48.240996 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.241086 kubelet[2716]: W0708 10:12:48.241029 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.241086 kubelet[2716]: E0708 10:12:48.241054 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.241576 kubelet[2716]: E0708 10:12:48.241563 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.241651 kubelet[2716]: W0708 10:12:48.241621 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.241651 kubelet[2716]: E0708 10:12:48.241636 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.241874 kubelet[2716]: E0708 10:12:48.241846 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.241874 kubelet[2716]: W0708 10:12:48.241859 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.241874 kubelet[2716]: E0708 10:12:48.241866 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:48.245042 kubelet[2716]: E0708 10:12:48.244999 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:48.245042 kubelet[2716]: W0708 10:12:48.245028 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:48.245042 kubelet[2716]: E0708 10:12:48.245039 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:50.117652 kubelet[2716]: E0708 10:12:50.117594 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:51.368172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918635244.mount: Deactivated successfully. 
Jul 8 10:12:51.751676 containerd[1553]: time="2025-07-08T10:12:51.751544795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:51.752807 containerd[1553]: time="2025-07-08T10:12:51.752782829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 8 10:12:51.754227 containerd[1553]: time="2025-07-08T10:12:51.754183360Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:51.756628 containerd[1553]: time="2025-07-08T10:12:51.756193311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:51.756740 containerd[1553]: time="2025-07-08T10:12:51.756702562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.634929982s" Jul 8 10:12:51.756783 containerd[1553]: time="2025-07-08T10:12:51.756741675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 8 10:12:51.757783 containerd[1553]: time="2025-07-08T10:12:51.757749506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 8 10:12:51.770963 containerd[1553]: time="2025-07-08T10:12:51.770913519Z" level=info msg="CreateContainer within sandbox \"94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 8 10:12:51.780096 containerd[1553]: time="2025-07-08T10:12:51.779542291Z" level=info msg="Container 6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:51.787329 containerd[1553]: time="2025-07-08T10:12:51.787288157Z" level=info msg="CreateContainer within sandbox \"94b01bd0ed468680726fe83425a1e67639f5d0188a1b552ac63de3b3c430d323\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b\"" Jul 8 10:12:51.787789 containerd[1553]: time="2025-07-08T10:12:51.787766319Z" level=info msg="StartContainer for \"6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b\"" Jul 8 10:12:51.788880 containerd[1553]: time="2025-07-08T10:12:51.788844542Z" level=info msg="connecting to shim 6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b" address="unix:///run/containerd/s/e5e0173d79e319b96857edd9d2dd627e2de95fd5ff23137629117b32e32403e1" protocol=ttrpc version=3 Jul 8 10:12:51.809205 systemd[1]: Started cri-containerd-6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b.scope - libcontainer container 6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b. 
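Editor's note: the containerd entries in the preceding line trace one image pull end to end: ImageCreate events for the ghcr.io/flatcar/calico/typha:v3.30.2 tag, its image ID and repo digest, a "Pulled image ... in 3.634929982s" summary, then CreateContainer inside the existing typha pod sandbox and a ttrpc connection to the container's shim before systemd starts the cri-containerd scope. The sketch below reproduces just the pull step with containerd's Go client; it is a hypothetical illustration, not how the kubelet drives containerd (that goes through the CRI gRPC API), and the socket path and "k8s.io" namespace are the usual defaults rather than values from this log.

```go
// Hypothetical reproduction of the pull that produced the ImageCreate /
// "Pulled image" lines above, using containerd's Go client directly.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```

As a rough mapping of the logged fields: the repo digest (da29d7…) is typically the digest of the pulled manifest, which is what img.Target().Digest reports, while the image id (b3baa6…) is the digest of the image's config blob.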
Jul 8 10:12:51.859268 containerd[1553]: time="2025-07-08T10:12:51.858637902Z" level=info msg="StartContainer for \"6c6980d419e0cb22dd5f8123f5de74ca6f55721c43fca5f786fb1449bf50292b\" returns successfully" Jul 8 10:12:52.117400 kubelet[2716]: E0708 10:12:52.117352 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:52.188819 kubelet[2716]: I0708 10:12:52.188719 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84b4b8fd4f-mx5ds" podStartSLOduration=1.552701855 podStartE2EDuration="5.188688759s" podCreationTimestamp="2025-07-08 10:12:47 +0000 UTC" firstStartedPulling="2025-07-08 10:12:48.121423171 +0000 UTC m=+19.098360459" lastFinishedPulling="2025-07-08 10:12:51.757410075 +0000 UTC m=+22.734347363" observedRunningTime="2025-07-08 10:12:52.187940128 +0000 UTC m=+23.164877416" watchObservedRunningTime="2025-07-08 10:12:52.188688759 +0000 UTC m=+23.165626037" Jul 8 10:12:52.241236 kubelet[2716]: E0708 10:12:52.241189 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.241236 kubelet[2716]: W0708 10:12:52.241223 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.241414 kubelet[2716]: E0708 10:12:52.241247 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.241490 kubelet[2716]: E0708 10:12:52.241460 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.241490 kubelet[2716]: W0708 10:12:52.241487 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.241538 kubelet[2716]: E0708 10:12:52.241496 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.241687 kubelet[2716]: E0708 10:12:52.241671 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.241687 kubelet[2716]: W0708 10:12:52.241681 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.241737 kubelet[2716]: E0708 10:12:52.241688 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.241902 kubelet[2716]: E0708 10:12:52.241878 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.241902 kubelet[2716]: W0708 10:12:52.241889 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.241902 kubelet[2716]: E0708 10:12:52.241897 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.242120 kubelet[2716]: E0708 10:12:52.242105 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242120 kubelet[2716]: W0708 10:12:52.242116 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.242173 kubelet[2716]: E0708 10:12:52.242124 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.242287 kubelet[2716]: E0708 10:12:52.242273 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242287 kubelet[2716]: W0708 10:12:52.242282 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.242341 kubelet[2716]: E0708 10:12:52.242289 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.242447 kubelet[2716]: E0708 10:12:52.242432 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242447 kubelet[2716]: W0708 10:12:52.242442 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.242493 kubelet[2716]: E0708 10:12:52.242448 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.242620 kubelet[2716]: E0708 10:12:52.242606 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242620 kubelet[2716]: W0708 10:12:52.242616 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.242672 kubelet[2716]: E0708 10:12:52.242624 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.242798 kubelet[2716]: E0708 10:12:52.242783 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242798 kubelet[2716]: W0708 10:12:52.242795 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.242842 kubelet[2716]: E0708 10:12:52.242802 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.242968 kubelet[2716]: E0708 10:12:52.242945 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.242968 kubelet[2716]: W0708 10:12:52.242964 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243017 kubelet[2716]: E0708 10:12:52.242971 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.243147 kubelet[2716]: E0708 10:12:52.243131 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.243147 kubelet[2716]: W0708 10:12:52.243142 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243147 kubelet[2716]: E0708 10:12:52.243149 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.243315 kubelet[2716]: E0708 10:12:52.243301 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.243315 kubelet[2716]: W0708 10:12:52.243311 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243359 kubelet[2716]: E0708 10:12:52.243318 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.243497 kubelet[2716]: E0708 10:12:52.243482 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.243497 kubelet[2716]: W0708 10:12:52.243492 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243547 kubelet[2716]: E0708 10:12:52.243501 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.243675 kubelet[2716]: E0708 10:12:52.243660 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.243675 kubelet[2716]: W0708 10:12:52.243670 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243722 kubelet[2716]: E0708 10:12:52.243679 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.243836 kubelet[2716]: E0708 10:12:52.243821 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.243836 kubelet[2716]: W0708 10:12:52.243831 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.243883 kubelet[2716]: E0708 10:12:52.243838 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.266417 kubelet[2716]: E0708 10:12:52.266378 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.266417 kubelet[2716]: W0708 10:12:52.266402 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.266518 kubelet[2716]: E0708 10:12:52.266425 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.266687 kubelet[2716]: E0708 10:12:52.266661 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.266687 kubelet[2716]: W0708 10:12:52.266675 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.266687 kubelet[2716]: E0708 10:12:52.266684 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.267008 kubelet[2716]: E0708 10:12:52.266963 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.267008 kubelet[2716]: W0708 10:12:52.266991 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.267008 kubelet[2716]: E0708 10:12:52.267019 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.267343 kubelet[2716]: E0708 10:12:52.267325 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.267343 kubelet[2716]: W0708 10:12:52.267338 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.267401 kubelet[2716]: E0708 10:12:52.267348 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.267555 kubelet[2716]: E0708 10:12:52.267508 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.267555 kubelet[2716]: W0708 10:12:52.267555 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.267633 kubelet[2716]: E0708 10:12:52.267565 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.267765 kubelet[2716]: E0708 10:12:52.267742 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.267765 kubelet[2716]: W0708 10:12:52.267756 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.267765 kubelet[2716]: E0708 10:12:52.267766 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.267943 kubelet[2716]: E0708 10:12:52.267925 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.267943 kubelet[2716]: W0708 10:12:52.267936 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.267943 kubelet[2716]: E0708 10:12:52.267944 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.268348 kubelet[2716]: E0708 10:12:52.268330 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.268348 kubelet[2716]: W0708 10:12:52.268345 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.268421 kubelet[2716]: E0708 10:12:52.268354 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.268540 kubelet[2716]: E0708 10:12:52.268520 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.268540 kubelet[2716]: W0708 10:12:52.268533 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.268540 kubelet[2716]: E0708 10:12:52.268541 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.268874 kubelet[2716]: E0708 10:12:52.268851 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.268874 kubelet[2716]: W0708 10:12:52.268866 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.268935 kubelet[2716]: E0708 10:12:52.268876 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.269090 kubelet[2716]: E0708 10:12:52.269053 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.269090 kubelet[2716]: W0708 10:12:52.269063 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.269090 kubelet[2716]: E0708 10:12:52.269087 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.269300 kubelet[2716]: E0708 10:12:52.269284 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.269300 kubelet[2716]: W0708 10:12:52.269294 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.269352 kubelet[2716]: E0708 10:12:52.269302 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.269567 kubelet[2716]: E0708 10:12:52.269549 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.269567 kubelet[2716]: W0708 10:12:52.269562 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.269616 kubelet[2716]: E0708 10:12:52.269570 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:52.269779 kubelet[2716]: E0708 10:12:52.269764 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.269779 kubelet[2716]: W0708 10:12:52.269774 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.269834 kubelet[2716]: E0708 10:12:52.269781 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.269985 kubelet[2716]: E0708 10:12:52.269968 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.269985 kubelet[2716]: W0708 10:12:52.269978 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.269985 kubelet[2716]: E0708 10:12:52.269986 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.270171 kubelet[2716]: E0708 10:12:52.270155 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.270171 kubelet[2716]: W0708 10:12:52.270167 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.270217 kubelet[2716]: E0708 10:12:52.270175 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.270383 kubelet[2716]: E0708 10:12:52.270366 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.270383 kubelet[2716]: W0708 10:12:52.270377 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.270438 kubelet[2716]: E0708 10:12:52.270384 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:52.270571 kubelet[2716]: E0708 10:12:52.270557 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:52.270571 kubelet[2716]: W0708 10:12:52.270566 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:52.270616 kubelet[2716]: E0708 10:12:52.270574 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.249273 kubelet[2716]: E0708 10:12:53.249212 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.249273 kubelet[2716]: W0708 10:12:53.249241 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.249273 kubelet[2716]: E0708 10:12:53.249270 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.249857 kubelet[2716]: E0708 10:12:53.249476 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.249857 kubelet[2716]: W0708 10:12:53.249485 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.249857 kubelet[2716]: E0708 10:12:53.249494 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.249857 kubelet[2716]: E0708 10:12:53.249701 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.249857 kubelet[2716]: W0708 10:12:53.249710 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.249857 kubelet[2716]: E0708 10:12:53.249721 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.250046 kubelet[2716]: E0708 10:12:53.249927 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.250046 kubelet[2716]: W0708 10:12:53.249949 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.250046 kubelet[2716]: E0708 10:12:53.249961 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.250211 kubelet[2716]: E0708 10:12:53.250192 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.250211 kubelet[2716]: W0708 10:12:53.250204 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.250268 kubelet[2716]: E0708 10:12:53.250214 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.250402 kubelet[2716]: E0708 10:12:53.250385 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.250402 kubelet[2716]: W0708 10:12:53.250397 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.250471 kubelet[2716]: E0708 10:12:53.250407 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.250600 kubelet[2716]: E0708 10:12:53.250583 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.250600 kubelet[2716]: W0708 10:12:53.250595 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.250665 kubelet[2716]: E0708 10:12:53.250605 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.250790 kubelet[2716]: E0708 10:12:53.250774 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.250790 kubelet[2716]: W0708 10:12:53.250785 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.250859 kubelet[2716]: E0708 10:12:53.250795 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.251010 kubelet[2716]: E0708 10:12:53.250991 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.251010 kubelet[2716]: W0708 10:12:53.251005 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.251100 kubelet[2716]: E0708 10:12:53.251016 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.251225 kubelet[2716]: E0708 10:12:53.251209 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.251225 kubelet[2716]: W0708 10:12:53.251220 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.251287 kubelet[2716]: E0708 10:12:53.251230 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.251417 kubelet[2716]: E0708 10:12:53.251399 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.251417 kubelet[2716]: W0708 10:12:53.251411 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.251484 kubelet[2716]: E0708 10:12:53.251420 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.251614 kubelet[2716]: E0708 10:12:53.251598 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.251614 kubelet[2716]: W0708 10:12:53.251609 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.251674 kubelet[2716]: E0708 10:12:53.251618 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.251806 kubelet[2716]: E0708 10:12:53.251789 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.251806 kubelet[2716]: W0708 10:12:53.251801 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.251871 kubelet[2716]: E0708 10:12:53.251811 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.252008 kubelet[2716]: E0708 10:12:53.251991 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.252008 kubelet[2716]: W0708 10:12:53.252003 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.252089 kubelet[2716]: E0708 10:12:53.252012 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.252241 kubelet[2716]: E0708 10:12:53.252225 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.252241 kubelet[2716]: W0708 10:12:53.252237 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.252307 kubelet[2716]: E0708 10:12:53.252246 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.274640 kubelet[2716]: E0708 10:12:53.274610 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.274640 kubelet[2716]: W0708 10:12:53.274631 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.274716 kubelet[2716]: E0708 10:12:53.274652 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.274849 kubelet[2716]: E0708 10:12:53.274834 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.274849 kubelet[2716]: W0708 10:12:53.274842 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.274849 kubelet[2716]: E0708 10:12:53.274849 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.275081 kubelet[2716]: E0708 10:12:53.275048 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.275081 kubelet[2716]: W0708 10:12:53.275057 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.275081 kubelet[2716]: E0708 10:12:53.275080 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.275333 kubelet[2716]: E0708 10:12:53.275299 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.275333 kubelet[2716]: W0708 10:12:53.275314 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.275333 kubelet[2716]: E0708 10:12:53.275326 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.275615 kubelet[2716]: E0708 10:12:53.275585 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.275615 kubelet[2716]: W0708 10:12:53.275596 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.275615 kubelet[2716]: E0708 10:12:53.275608 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.275829 kubelet[2716]: E0708 10:12:53.275812 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.275829 kubelet[2716]: W0708 10:12:53.275825 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.275890 kubelet[2716]: E0708 10:12:53.275837 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.276103 kubelet[2716]: E0708 10:12:53.276090 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.276103 kubelet[2716]: W0708 10:12:53.276100 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.276152 kubelet[2716]: E0708 10:12:53.276111 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.276424 kubelet[2716]: E0708 10:12:53.276402 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.276424 kubelet[2716]: W0708 10:12:53.276416 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.276476 kubelet[2716]: E0708 10:12:53.276430 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.276645 kubelet[2716]: E0708 10:12:53.276633 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.276672 kubelet[2716]: W0708 10:12:53.276643 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.276672 kubelet[2716]: E0708 10:12:53.276653 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.276846 kubelet[2716]: E0708 10:12:53.276828 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.276846 kubelet[2716]: W0708 10:12:53.276839 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.276888 kubelet[2716]: E0708 10:12:53.276847 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.277042 kubelet[2716]: E0708 10:12:53.277029 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.277042 kubelet[2716]: W0708 10:12:53.277040 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.277100 kubelet[2716]: E0708 10:12:53.277049 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.277258 kubelet[2716]: E0708 10:12:53.277245 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.277258 kubelet[2716]: W0708 10:12:53.277255 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.277303 kubelet[2716]: E0708 10:12:53.277264 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.277450 kubelet[2716]: E0708 10:12:53.277437 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.277450 kubelet[2716]: W0708 10:12:53.277447 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.277502 kubelet[2716]: E0708 10:12:53.277458 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.277858 kubelet[2716]: E0708 10:12:53.277822 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.277858 kubelet[2716]: W0708 10:12:53.277851 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.277904 kubelet[2716]: E0708 10:12:53.277873 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.278111 kubelet[2716]: E0708 10:12:53.278095 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.278111 kubelet[2716]: W0708 10:12:53.278107 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.278156 kubelet[2716]: E0708 10:12:53.278118 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.278384 kubelet[2716]: E0708 10:12:53.278369 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.278384 kubelet[2716]: W0708 10:12:53.278381 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.278443 kubelet[2716]: E0708 10:12:53.278391 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.278673 kubelet[2716]: E0708 10:12:53.278660 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.278701 kubelet[2716]: W0708 10:12:53.278674 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.278701 kubelet[2716]: E0708 10:12:53.278685 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:12:53.278894 kubelet[2716]: E0708 10:12:53.278881 2716 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:12:53.278894 kubelet[2716]: W0708 10:12:53.278891 2716 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:12:53.278959 kubelet[2716]: E0708 10:12:53.278902 2716 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:12:53.368637 containerd[1553]: time="2025-07-08T10:12:53.368569938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:53.369298 containerd[1553]: time="2025-07-08T10:12:53.369250070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 8 10:12:53.370328 containerd[1553]: time="2025-07-08T10:12:53.370283498Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:53.374996 containerd[1553]: time="2025-07-08T10:12:53.374943039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:53.375613 containerd[1553]: time="2025-07-08T10:12:53.375566232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.617780849s" Jul 8 10:12:53.375649 containerd[1553]: time="2025-07-08T10:12:53.375614624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 8 10:12:53.380253 containerd[1553]: time="2025-07-08T10:12:53.380218770Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 8 10:12:53.388689 containerd[1553]: time="2025-07-08T10:12:53.388654707Z" level=info msg="Container 6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:53.398283 containerd[1553]: time="2025-07-08T10:12:53.398228988Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\"" Jul 8 10:12:53.398624 containerd[1553]: time="2025-07-08T10:12:53.398572286Z" level=info msg="StartContainer for \"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\"" Jul 8 10:12:53.399964 containerd[1553]: time="2025-07-08T10:12:53.399938951Z" level=info msg="connecting to shim 6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff" address="unix:///run/containerd/s/9ddd0acf9171398716d6627acc4bddd7e3a8f64050a30d348403ffc2caac993a" protocol=ttrpc version=3 Jul 8 10:12:53.426357 systemd[1]: Started cri-containerd-6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff.scope - libcontainer container 6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff. Jul 8 10:12:53.485210 systemd[1]: cri-containerd-6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff.scope: Deactivated successfully. 
Jul 8 10:12:53.487010 containerd[1553]: time="2025-07-08T10:12:53.486976877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\" id:\"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\" pid:3474 exited_at:{seconds:1751969573 nanos:486583215}" Jul 8 10:12:53.596500 containerd[1553]: time="2025-07-08T10:12:53.596426737Z" level=info msg="received exit event container_id:\"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\" id:\"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\" pid:3474 exited_at:{seconds:1751969573 nanos:486583215}" Jul 8 10:12:53.606677 containerd[1553]: time="2025-07-08T10:12:53.606630895Z" level=info msg="StartContainer for \"6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff\" returns successfully" Jul 8 10:12:53.619790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d75b1bb1e5e3ed84998d89002f77931543d6a96c4d8eebbf7c493bcdb495cff-rootfs.mount: Deactivated successfully. Jul 8 10:12:54.117577 kubelet[2716]: E0708 10:12:54.117525 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:54.183312 containerd[1553]: time="2025-07-08T10:12:54.183265188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 8 10:12:56.118796 kubelet[2716]: E0708 10:12:56.117601 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:56.891168 containerd[1553]: time="2025-07-08T10:12:56.891108600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:56.891834 containerd[1553]: time="2025-07-08T10:12:56.891786316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 8 10:12:56.893035 containerd[1553]: time="2025-07-08T10:12:56.892994520Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:56.897086 containerd[1553]: time="2025-07-08T10:12:56.896341251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:12:56.897386 containerd[1553]: time="2025-07-08T10:12:56.897357595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.714050948s" Jul 8 10:12:56.897465 containerd[1553]: time="2025-07-08T10:12:56.897449949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 8 10:12:56.902959 containerd[1553]: time="2025-07-08T10:12:56.902913365Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 8 10:12:56.920125 containerd[1553]: time="2025-07-08T10:12:56.920045962Z" level=info msg="Container c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:12:56.929972 containerd[1553]: time="2025-07-08T10:12:56.929921084Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\"" Jul 8 10:12:56.930531 containerd[1553]: time="2025-07-08T10:12:56.930498862Z" level=info msg="StartContainer for \"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\"" Jul 8 10:12:56.931896 containerd[1553]: time="2025-07-08T10:12:56.931859453Z" level=info msg="connecting to shim c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41" address="unix:///run/containerd/s/9ddd0acf9171398716d6627acc4bddd7e3a8f64050a30d348403ffc2caac993a" protocol=ttrpc version=3 Jul 8 10:12:56.957292 systemd[1]: Started cri-containerd-c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41.scope - libcontainer container c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41. Jul 8 10:12:57.000171 containerd[1553]: time="2025-07-08T10:12:57.000114622Z" level=info msg="StartContainer for \"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\" returns successfully" Jul 8 10:12:58.117380 kubelet[2716]: E0708 10:12:58.117340 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:12:58.544753 containerd[1553]: time="2025-07-08T10:12:58.544621954Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 10:12:58.547511 systemd[1]: cri-containerd-c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41.scope: Deactivated successfully. Jul 8 10:12:58.548057 systemd[1]: cri-containerd-c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41.scope: Consumed 526ms CPU time, 177.4M memory peak, 3.3M read from disk, 171.2M written to disk. 
Jul 8 10:12:58.548466 containerd[1553]: time="2025-07-08T10:12:58.548439007Z" level=info msg="received exit event container_id:\"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\" id:\"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\" pid:3532 exited_at:{seconds:1751969578 nanos:548277112}" Jul 8 10:12:58.548721 containerd[1553]: time="2025-07-08T10:12:58.548667276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\" id:\"c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41\" pid:3532 exited_at:{seconds:1751969578 nanos:548277112}" Jul 8 10:12:58.568421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4b45ef8a716d6f876af29dd867bdfb21f5e3618743195a038579fe1c0241c41-rootfs.mount: Deactivated successfully. Jul 8 10:12:58.626790 kubelet[2716]: I0708 10:12:58.626676 2716 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 8 10:12:58.752507 systemd[1]: Created slice kubepods-burstable-podb2379b87_3302_4e51_a739_e07127b7a8fb.slice - libcontainer container kubepods-burstable-podb2379b87_3302_4e51_a739_e07127b7a8fb.slice. Jul 8 10:12:58.763284 systemd[1]: Created slice kubepods-burstable-podde0ffe7a_e2b0_48fd_b874_045162653d78.slice - libcontainer container kubepods-burstable-podde0ffe7a_e2b0_48fd_b874_045162653d78.slice. Jul 8 10:12:58.769812 systemd[1]: Created slice kubepods-besteffort-podbd14101c_66fe_456d_94fb_8c66c533c1b5.slice - libcontainer container kubepods-besteffort-podbd14101c_66fe_456d_94fb_8c66c533c1b5.slice. Jul 8 10:12:58.775177 systemd[1]: Created slice kubepods-besteffort-poda7af3032_323d_4592_aa41_5139c600325d.slice - libcontainer container kubepods-besteffort-poda7af3032_323d_4592_aa41_5139c600325d.slice. Jul 8 10:12:58.780958 systemd[1]: Created slice kubepods-besteffort-pode1abf930_a2f5_4b0a_891f_2fb307b9e2a0.slice - libcontainer container kubepods-besteffort-pode1abf930_a2f5_4b0a_891f_2fb307b9e2a0.slice. Jul 8 10:12:58.787794 systemd[1]: Created slice kubepods-besteffort-pod9081cece_19b9_4b34_92d7_7256bea099db.slice - libcontainer container kubepods-besteffort-pod9081cece_19b9_4b34_92d7_7256bea099db.slice. Jul 8 10:12:58.793448 systemd[1]: Created slice kubepods-besteffort-pod3c4eb006_3f30_452e_b844_82020e73618d.slice - libcontainer container kubepods-besteffort-pod3c4eb006_3f30_452e_b844_82020e73618d.slice. 
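
containerd reports container exit times as an exited_at pair of Unix seconds and nanoseconds; the pair {1751969578, 548277112} above is the same instant as the 10:12:58.548 journal timestamp. A small conversion sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event for the install-cni container.
	const seconds, nanos = 1751969578, 548277112

	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t.Format(time.RFC3339Nano))
	// 2025-07-08T10:12:58.548277112Z, matching the journal line above.
}
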
Jul 8 10:12:58.811258 kubelet[2716]: I0708 10:12:58.811125 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-ca-bundle\") pod \"whisker-5fc677b87c-rvslg\" (UID: \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " pod="calico-system/whisker-5fc677b87c-rvslg" Jul 8 10:12:58.811258 kubelet[2716]: I0708 10:12:58.811171 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cnt\" (UniqueName: \"kubernetes.io/projected/b2379b87-3302-4e51-a739-e07127b7a8fb-kube-api-access-v6cnt\") pod \"coredns-674b8bbfcf-rmwf4\" (UID: \"b2379b87-3302-4e51-a739-e07127b7a8fb\") " pod="kube-system/coredns-674b8bbfcf-rmwf4" Jul 8 10:12:58.811258 kubelet[2716]: I0708 10:12:58.811198 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mdvt\" (UniqueName: \"kubernetes.io/projected/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-kube-api-access-6mdvt\") pod \"whisker-5fc677b87c-rvslg\" (UID: \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " pod="calico-system/whisker-5fc677b87c-rvslg" Jul 8 10:12:58.811258 kubelet[2716]: I0708 10:12:58.811213 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2379b87-3302-4e51-a739-e07127b7a8fb-config-volume\") pod \"coredns-674b8bbfcf-rmwf4\" (UID: \"b2379b87-3302-4e51-a739-e07127b7a8fb\") " pod="kube-system/coredns-674b8bbfcf-rmwf4" Jul 8 10:12:58.811258 kubelet[2716]: I0708 10:12:58.811229 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7af3032-323d-4592-aa41-5139c600325d-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-d9q6m\" (UID: \"a7af3032-323d-4592-aa41-5139c600325d\") " pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:58.811476 kubelet[2716]: I0708 10:12:58.811245 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd14101c-66fe-456d-94fb-8c66c533c1b5-tigera-ca-bundle\") pod \"calico-kube-controllers-746475f85f-hkq8h\" (UID: \"bd14101c-66fe-456d-94fb-8c66c533c1b5\") " pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:12:58.811476 kubelet[2716]: I0708 10:12:58.811343 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljm8\" (UniqueName: \"kubernetes.io/projected/bd14101c-66fe-456d-94fb-8c66c533c1b5-kube-api-access-bljm8\") pod \"calico-kube-controllers-746475f85f-hkq8h\" (UID: \"bd14101c-66fe-456d-94fb-8c66c533c1b5\") " pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:12:58.811476 kubelet[2716]: I0708 10:12:58.811386 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9081cece-19b9-4b34-92d7-7256bea099db-calico-apiserver-certs\") pod \"calico-apiserver-77c59df47c-d77nl\" (UID: \"9081cece-19b9-4b34-92d7-7256bea099db\") " pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" Jul 8 10:12:58.811476 kubelet[2716]: I0708 10:12:58.811405 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/de0ffe7a-e2b0-48fd-b874-045162653d78-config-volume\") pod \"coredns-674b8bbfcf-z998s\" (UID: \"de0ffe7a-e2b0-48fd-b874-045162653d78\") " pod="kube-system/coredns-674b8bbfcf-z998s" Jul 8 10:12:58.811476 kubelet[2716]: I0708 10:12:58.811434 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c4eb006-3f30-452e-b844-82020e73618d-calico-apiserver-certs\") pod \"calico-apiserver-77c59df47c-p96dh\" (UID: \"3c4eb006-3f30-452e-b844-82020e73618d\") " pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:12:58.811597 kubelet[2716]: I0708 10:12:58.811451 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a7af3032-323d-4592-aa41-5139c600325d-goldmane-key-pair\") pod \"goldmane-768f4c5c69-d9q6m\" (UID: \"a7af3032-323d-4592-aa41-5139c600325d\") " pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:58.811597 kubelet[2716]: I0708 10:12:58.811463 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsqm\" (UniqueName: \"kubernetes.io/projected/a7af3032-323d-4592-aa41-5139c600325d-kube-api-access-pxsqm\") pod \"goldmane-768f4c5c69-d9q6m\" (UID: \"a7af3032-323d-4592-aa41-5139c600325d\") " pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:58.811597 kubelet[2716]: I0708 10:12:58.811487 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-backend-key-pair\") pod \"whisker-5fc677b87c-rvslg\" (UID: \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " pod="calico-system/whisker-5fc677b87c-rvslg" Jul 8 10:12:58.811597 kubelet[2716]: I0708 10:12:58.811505 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbb4s\" (UniqueName: \"kubernetes.io/projected/3c4eb006-3f30-452e-b844-82020e73618d-kube-api-access-gbb4s\") pod \"calico-apiserver-77c59df47c-p96dh\" (UID: \"3c4eb006-3f30-452e-b844-82020e73618d\") " pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:12:58.811597 kubelet[2716]: I0708 10:12:58.811519 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkcb\" (UniqueName: \"kubernetes.io/projected/9081cece-19b9-4b34-92d7-7256bea099db-kube-api-access-8xkcb\") pod \"calico-apiserver-77c59df47c-d77nl\" (UID: \"9081cece-19b9-4b34-92d7-7256bea099db\") " pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" Jul 8 10:12:58.811718 kubelet[2716]: I0708 10:12:58.811556 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7af3032-323d-4592-aa41-5139c600325d-config\") pod \"goldmane-768f4c5c69-d9q6m\" (UID: \"a7af3032-323d-4592-aa41-5139c600325d\") " pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:58.811718 kubelet[2716]: I0708 10:12:58.811574 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jw6\" (UniqueName: \"kubernetes.io/projected/de0ffe7a-e2b0-48fd-b874-045162653d78-kube-api-access-l7jw6\") pod \"coredns-674b8bbfcf-z998s\" (UID: \"de0ffe7a-e2b0-48fd-b874-045162653d78\") " pod="kube-system/coredns-674b8bbfcf-z998s" Jul 8 
10:12:59.058798 containerd[1553]: time="2025-07-08T10:12:59.058759895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmwf4,Uid:b2379b87-3302-4e51-a739-e07127b7a8fb,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:59.066428 containerd[1553]: time="2025-07-08T10:12:59.066393126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z998s,Uid:de0ffe7a-e2b0-48fd-b874-045162653d78,Namespace:kube-system,Attempt:0,}" Jul 8 10:12:59.074340 containerd[1553]: time="2025-07-08T10:12:59.074280957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,}" Jul 8 10:12:59.079584 containerd[1553]: time="2025-07-08T10:12:59.079537105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-d9q6m,Uid:a7af3032-323d-4592-aa41-5139c600325d,Namespace:calico-system,Attempt:0,}" Jul 8 10:12:59.084343 containerd[1553]: time="2025-07-08T10:12:59.084307190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc677b87c-rvslg,Uid:e1abf930-a2f5-4b0a-891f-2fb307b9e2a0,Namespace:calico-system,Attempt:0,}" Jul 8 10:12:59.091870 containerd[1553]: time="2025-07-08T10:12:59.091818571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-d77nl,Uid:9081cece-19b9-4b34-92d7-7256bea099db,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:12:59.096448 containerd[1553]: time="2025-07-08T10:12:59.096411552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:12:59.172012 containerd[1553]: time="2025-07-08T10:12:59.171471400Z" level=error msg="Failed to destroy network for sandbox \"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.173499 containerd[1553]: time="2025-07-08T10:12:59.172537455Z" level=error msg="Failed to destroy network for sandbox \"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.217509 containerd[1553]: time="2025-07-08T10:12:59.217454943Z" level=error msg="Failed to destroy network for sandbox \"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.232403 containerd[1553]: time="2025-07-08T10:12:59.232366798Z" level=error msg="Failed to destroy network for sandbox \"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.232690 containerd[1553]: time="2025-07-08T10:12:59.232628440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-d9q6m,Uid:a7af3032-323d-4592-aa41-5139c600325d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.233144 containerd[1553]: time="2025-07-08T10:12:59.233115516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z998s,Uid:de0ffe7a-e2b0-48fd-b874-045162653d78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.233361 containerd[1553]: time="2025-07-08T10:12:59.233337585Z" level=error msg="Failed to destroy network for sandbox \"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.234000 containerd[1553]: time="2025-07-08T10:12:59.233852644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-d77nl,Uid:9081cece-19b9-4b34-92d7-7256bea099db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.234904 containerd[1553]: time="2025-07-08T10:12:59.234876720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmwf4,Uid:b2379b87-3302-4e51-a739-e07127b7a8fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.234982 containerd[1553]: time="2025-07-08T10:12:59.234935641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.240341 kubelet[2716]: E0708 10:12:59.240292 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.240688 kubelet[2716]: E0708 10:12:59.240577 2716 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.240716 kubelet[2716]: E0708 10:12:59.240697 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.240776 kubelet[2716]: E0708 10:12:59.240753 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.241468 kubelet[2716]: E0708 10:12:59.241436 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z998s" Jul 8 10:12:59.242257 kubelet[2716]: E0708 10:12:59.241842 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z998s" Jul 8 10:12:59.242257 kubelet[2716]: E0708 10:12:59.241455 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:12:59.242257 kubelet[2716]: E0708 10:12:59.241886 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:12:59.242357 kubelet[2716]: E0708 10:12:59.241906 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z998s_kube-system(de0ffe7a-e2b0-48fd-b874-045162653d78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-z998s_kube-system(de0ffe7a-e2b0-48fd-b874-045162653d78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85097118c984d628ef9d40a2b90bf70c6c0cc2e80b30474363e7908d1a0340c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z998s" podUID="de0ffe7a-e2b0-48fd-b874-045162653d78" Jul 8 10:12:59.242357 kubelet[2716]: E0708 10:12:59.241530 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" Jul 8 10:12:59.242357 kubelet[2716]: E0708 10:12:59.241949 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" Jul 8 10:12:59.242456 kubelet[2716]: E0708 10:12:59.241960 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746475f85f-hkq8h_calico-system(bd14101c-66fe-456d-94fb-8c66c533c1b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746475f85f-hkq8h_calico-system(bd14101c-66fe-456d-94fb-8c66c533c1b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf6b904c5c209261b1bc18c882b0ad432714e507c94e4dfcf022247eb28c50c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" podUID="bd14101c-66fe-456d-94fb-8c66c533c1b5" Jul 8 10:12:59.242456 kubelet[2716]: E0708 10:12:59.241972 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c59df47c-d77nl_calico-apiserver(9081cece-19b9-4b34-92d7-7256bea099db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c59df47c-d77nl_calico-apiserver(9081cece-19b9-4b34-92d7-7256bea099db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4223036f13d9e0c8b534b3e1454eb8b58309f3727fd70e15f52d69b9492b76f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" podUID="9081cece-19b9-4b34-92d7-7256bea099db" Jul 8 10:12:59.242456 kubelet[2716]: E0708 10:12:59.241538 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.242566 kubelet[2716]: E0708 10:12:59.242021 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmwf4" Jul 8 10:12:59.242566 kubelet[2716]: E0708 10:12:59.242032 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rmwf4" Jul 8 10:12:59.242566 kubelet[2716]: E0708 10:12:59.242058 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rmwf4_kube-system(b2379b87-3302-4e51-a739-e07127b7a8fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rmwf4_kube-system(b2379b87-3302-4e51-a739-e07127b7a8fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82bdd0254e1c32616d305346a4bb1ae152aeb19431534c6554a32a03a7f0d3e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rmwf4" podUID="b2379b87-3302-4e51-a739-e07127b7a8fb" Jul 8 10:12:59.243351 kubelet[2716]: E0708 10:12:59.243327 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:59.244322 kubelet[2716]: E0708 10:12:59.244221 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-d9q6m" Jul 8 10:12:59.244322 kubelet[2716]: E0708 10:12:59.244283 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-d9q6m_calico-system(a7af3032-323d-4592-aa41-5139c600325d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-d9q6m_calico-system(a7af3032-323d-4592-aa41-5139c600325d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b6b629b97590efe1bbd55952257891c77b5b41a5ea70f460469087521dc816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-d9q6m" podUID="a7af3032-323d-4592-aa41-5139c600325d" Jul 8 10:12:59.249958 containerd[1553]: time="2025-07-08T10:12:59.249912799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 8 10:12:59.262121 containerd[1553]: time="2025-07-08T10:12:59.261806005Z" level=error msg="Failed to destroy network for sandbox \"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.265051 containerd[1553]: time="2025-07-08T10:12:59.265007297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.265382 kubelet[2716]: E0708 10:12:59.265282 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.265382 kubelet[2716]: E0708 10:12:59.265332 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:12:59.265382 kubelet[2716]: E0708 10:12:59.265351 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:12:59.265559 kubelet[2716]: E0708 10:12:59.265523 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c59df47c-p96dh_calico-apiserver(3c4eb006-3f30-452e-b844-82020e73618d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c59df47c-p96dh_calico-apiserver(3c4eb006-3f30-452e-b844-82020e73618d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9d97f8afc9330c37541d1172f2181b56e0ed0faa617bc9e6a6819f315046c1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" podUID="3c4eb006-3f30-452e-b844-82020e73618d" Jul 8 10:12:59.271097 
containerd[1553]: time="2025-07-08T10:12:59.271035279Z" level=error msg="Failed to destroy network for sandbox \"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.272200 containerd[1553]: time="2025-07-08T10:12:59.272157209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc677b87c-rvslg,Uid:e1abf930-a2f5-4b0a-891f-2fb307b9e2a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.272367 kubelet[2716]: E0708 10:12:59.272330 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:12:59.272367 kubelet[2716]: E0708 10:12:59.272374 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fc677b87c-rvslg" Jul 8 10:12:59.272578 kubelet[2716]: E0708 10:12:59.272392 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fc677b87c-rvslg" Jul 8 10:12:59.272578 kubelet[2716]: E0708 10:12:59.272437 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fc677b87c-rvslg_calico-system(e1abf930-a2f5-4b0a-891f-2fb307b9e2a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fc677b87c-rvslg_calico-system(e1abf930-a2f5-4b0a-891f-2fb307b9e2a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a39707a157c9026f37d944547354045f12e80d7cb4fde05b9588c4a975de60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fc677b87c-rvslg" podUID="e1abf930-a2f5-4b0a-891f-2fb307b9e2a0" Jul 8 10:13:00.123460 systemd[1]: Created slice kubepods-besteffort-pod9d03d297_85cc_4cfe_b78f_31085a48166e.slice - libcontainer container kubepods-besteffort-pod9d03d297_85cc_4cfe_b78f_31085a48166e.slice. 
Jul 8 10:13:00.125778 containerd[1553]: time="2025-07-08T10:13:00.125746247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd6h8,Uid:9d03d297-85cc-4cfe-b78f-31085a48166e,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:00.322494 containerd[1553]: time="2025-07-08T10:13:00.322427273Z" level=error msg="Failed to destroy network for sandbox \"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:00.323785 containerd[1553]: time="2025-07-08T10:13:00.323724052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd6h8,Uid:9d03d297-85cc-4cfe-b78f-31085a48166e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:00.324041 kubelet[2716]: E0708 10:13:00.323985 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:00.324409 kubelet[2716]: E0708 10:13:00.324059 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:13:00.324409 kubelet[2716]: E0708 10:13:00.324090 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd6h8" Jul 8 10:13:00.324409 kubelet[2716]: E0708 10:13:00.324160 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cd6h8_calico-system(9d03d297-85cc-4cfe-b78f-31085a48166e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cd6h8_calico-system(9d03d297-85cc-4cfe-b78f-31085a48166e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e434a066c6602bc4f03c824c0b1de894132b59e4e1c4b7c2878e5b49584ed69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cd6h8" podUID="9d03d297-85cc-4cfe-b78f-31085a48166e" Jul 8 10:13:00.326188 systemd[1]: run-netns-cni\x2d1e2f242a\x2d7b12\x2dac8d\x2d9f5f\x2d400f9089101c.mount: Deactivated successfully. 
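
The \x2d sequences in unit names such as run-netns-cni\x2d1e2f242a… are systemd's path escaping: "/" in the mounted path becomes "-", and bytes outside the allowed set, including a literal "-", are written as \xNN hex escapes. The sketch below is only a rough approximation of the rule described in systemd.unit(5), with corner cases such as a leading dot ignored:

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path-to-unit-name escaping:
// strip leading/trailing "/", turn the remaining "/" into "-", and
// hex-escape bytes that are not ASCII alphanumerics, ":", "_" or ".".
func escapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Produces run-netns-cni\x2d1e2f242a\x2d7b12\x2d... as in the journal.
	fmt.Println(escapePath("/run/netns/cni-1e2f242a-7b12-ac8d-9f5f-400f9089101c") + ".mount")
}
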
Jul 8 10:13:09.684786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270003267.mount: Deactivated successfully. Jul 8 10:13:10.118352 containerd[1553]: time="2025-07-08T10:13:10.118294039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:13:10.816832 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:36530.service - OpenSSH per-connection server daemon (10.0.0.1:36530). Jul 8 10:13:11.023920 containerd[1553]: time="2025-07-08T10:13:11.023837465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:11.033373 containerd[1553]: time="2025-07-08T10:13:11.033299615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 8 10:13:11.052571 containerd[1553]: time="2025-07-08T10:13:11.052059103Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:11.063252 containerd[1553]: time="2025-07-08T10:13:11.063188835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:11.064041 containerd[1553]: time="2025-07-08T10:13:11.064004897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.814047895s" Jul 8 10:13:11.064041 containerd[1553]: time="2025-07-08T10:13:11.064034163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 8 10:13:11.087532 sshd[3851]: Accepted publickey for core from 10.0.0.1 port 36530 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:11.089302 sshd-session[3851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:11.094344 systemd-logind[1535]: New session 8 of user core. Jul 8 10:13:11.100226 containerd[1553]: time="2025-07-08T10:13:11.100134797Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 8 10:13:11.100248 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 8 10:13:11.103124 containerd[1553]: time="2025-07-08T10:13:11.102775236Z" level=error msg="Failed to destroy network for sandbox \"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.105401 systemd[1]: run-netns-cni\x2dc468d74b\x2da73a\x2da46b\x2db4d5\x2d75bf967aabf5.mount: Deactivated successfully. 
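
The calico/node pull above reports its own size and wall time, so the average transfer rate can be backed out directly: roughly 158,500,025 bytes in 11.814 s, or about 12.8 MiB/s. A quick arithmetic sketch using the figures from the "Pulled image" line:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the ghcr.io/flatcar/calico/node:v3.30.2 pull above.
	const sizeBytes = 158500025
	elapsed, _ := time.ParseDuration("11.814047895s")

	bytesPerSec := float64(sizeBytes) / elapsed.Seconds()
	fmt.Printf("average pull rate: %.1f MiB/s\n", bytesPerSec/(1<<20))
	// average pull rate: 12.8 MiB/s
}
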
Jul 8 10:13:11.105613 containerd[1553]: time="2025-07-08T10:13:11.105579462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.105863 kubelet[2716]: E0708 10:13:11.105816 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.106181 kubelet[2716]: E0708 10:13:11.105879 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:13:11.106181 kubelet[2716]: E0708 10:13:11.105900 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" Jul 8 10:13:11.106181 kubelet[2716]: E0708 10:13:11.105946 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c59df47c-p96dh_calico-apiserver(3c4eb006-3f30-452e-b844-82020e73618d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c59df47c-p96dh_calico-apiserver(3c4eb006-3f30-452e-b844-82020e73618d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"704e3cba093af03dc9444b04803d271a6cc3a8e2695b90a68f8614755dab4895\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" podUID="3c4eb006-3f30-452e-b844-82020e73618d" Jul 8 10:13:11.112248 containerd[1553]: time="2025-07-08T10:13:11.112211898Z" level=info msg="Container c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:11.117856 containerd[1553]: time="2025-07-08T10:13:11.117830539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:11.124663 containerd[1553]: time="2025-07-08T10:13:11.124619608Z" level=info msg="CreateContainer within sandbox \"79d2861a408a20d0c5e1ab56bda9d17295344fb3fb9261e7f91c2d2e12d93c4c\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\"" Jul 8 10:13:11.125319 containerd[1553]: time="2025-07-08T10:13:11.125290187Z" level=info msg="StartContainer for \"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\"" Jul 8 10:13:11.126939 containerd[1553]: time="2025-07-08T10:13:11.126905591Z" level=info msg="connecting to shim c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a" address="unix:///run/containerd/s/9ddd0acf9171398716d6627acc4bddd7e3a8f64050a30d348403ffc2caac993a" protocol=ttrpc version=3 Jul 8 10:13:11.151549 systemd[1]: Started cri-containerd-c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a.scope - libcontainer container c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a. Jul 8 10:13:11.179197 containerd[1553]: time="2025-07-08T10:13:11.179108716Z" level=error msg="Failed to destroy network for sandbox \"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.188153 containerd[1553]: time="2025-07-08T10:13:11.187959577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.188322 kubelet[2716]: E0708 10:13:11.188228 2716 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:13:11.188322 kubelet[2716]: E0708 10:13:11.188310 2716 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:13:11.188390 kubelet[2716]: E0708 10:13:11.188334 2716 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" Jul 8 10:13:11.188419 kubelet[2716]: E0708 10:13:11.188399 2716 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-746475f85f-hkq8h_calico-system(bd14101c-66fe-456d-94fb-8c66c533c1b5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-746475f85f-hkq8h_calico-system(bd14101c-66fe-456d-94fb-8c66c533c1b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04de22d2fa2a328aad61033cddc1dc9dbde573ceb7b4a17daa9ff7a04fc7b57a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" podUID="bd14101c-66fe-456d-94fb-8c66c533c1b5" Jul 8 10:13:11.209286 containerd[1553]: time="2025-07-08T10:13:11.209232727Z" level=info msg="StartContainer for \"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\" returns successfully" Jul 8 10:13:11.255186 sshd[3887]: Connection closed by 10.0.0.1 port 36530 Jul 8 10:13:11.255808 sshd-session[3851]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:11.262877 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:36530.service: Deactivated successfully. Jul 8 10:13:11.265889 systemd[1]: session-8.scope: Deactivated successfully. Jul 8 10:13:11.267526 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit. Jul 8 10:13:11.269011 systemd-logind[1535]: Removed session 8. Jul 8 10:13:11.303374 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 8 10:13:11.303503 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 8 10:13:11.317050 kubelet[2716]: I0708 10:13:11.316837 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cjhm5" podStartSLOduration=1.473716718 podStartE2EDuration="24.316818638s" podCreationTimestamp="2025-07-08 10:12:47 +0000 UTC" firstStartedPulling="2025-07-08 10:12:48.221707779 +0000 UTC m=+19.198645067" lastFinishedPulling="2025-07-08 10:13:11.064809699 +0000 UTC m=+42.041746987" observedRunningTime="2025-07-08 10:13:11.316781448 +0000 UTC m=+42.293718736" watchObservedRunningTime="2025-07-08 10:13:11.316818638 +0000 UTC m=+42.293755926" Jul 8 10:13:11.430778 containerd[1553]: time="2025-07-08T10:13:11.430654436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\" id:\"2036e3c393bbc0953d119e99128a5f558a76dc2286423c38ede31fcaf4cbdd6d\" pid:3987 exit_status:1 exited_at:{seconds:1751969591 nanos:430311261}" Jul 8 10:13:11.502764 kubelet[2716]: I0708 10:13:11.502724 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-backend-key-pair\") pod \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\" (UID: \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " Jul 8 10:13:11.502764 kubelet[2716]: I0708 10:13:11.502762 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-ca-bundle\") pod \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\" (UID: \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " Jul 8 10:13:11.502960 kubelet[2716]: I0708 10:13:11.502788 2716 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mdvt\" (UniqueName: \"kubernetes.io/projected/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-kube-api-access-6mdvt\") pod \"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\" (UID: 
\"e1abf930-a2f5-4b0a-891f-2fb307b9e2a0\") " Jul 8 10:13:11.503988 kubelet[2716]: I0708 10:13:11.503913 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0" (UID: "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 8 10:13:11.506919 kubelet[2716]: I0708 10:13:11.506889 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-kube-api-access-6mdvt" (OuterVolumeSpecName: "kube-api-access-6mdvt") pod "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0" (UID: "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0"). InnerVolumeSpecName "kube-api-access-6mdvt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 8 10:13:11.507468 kubelet[2716]: I0708 10:13:11.507439 2716 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0" (UID: "e1abf930-a2f5-4b0a-891f-2fb307b9e2a0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 8 10:13:11.603889 kubelet[2716]: I0708 10:13:11.603829 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 8 10:13:11.603889 kubelet[2716]: I0708 10:13:11.603871 2716 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mdvt\" (UniqueName: \"kubernetes.io/projected/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-kube-api-access-6mdvt\") on node \"localhost\" DevicePath \"\"" Jul 8 10:13:11.603889 kubelet[2716]: I0708 10:13:11.603883 2716 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 8 10:13:12.015585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292413650.mount: Deactivated successfully. Jul 8 10:13:12.015694 systemd[1]: var-lib-kubelet-pods-e1abf930\x2da2f5\x2d4b0a\x2d891f\x2d2fb307b9e2a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6mdvt.mount: Deactivated successfully. Jul 8 10:13:12.015777 systemd[1]: var-lib-kubelet-pods-e1abf930\x2da2f5\x2d4b0a\x2d891f\x2d2fb307b9e2a0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 8 10:13:12.118374 containerd[1553]: time="2025-07-08T10:13:12.118299134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd6h8,Uid:9d03d297-85cc-4cfe-b78f-31085a48166e,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:12.118516 containerd[1553]: time="2025-07-08T10:13:12.118300307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z998s,Uid:de0ffe7a-e2b0-48fd-b874-045162653d78,Namespace:kube-system,Attempt:0,}" Jul 8 10:13:12.296834 systemd[1]: Removed slice kubepods-besteffort-pode1abf930_a2f5_4b0a_891f_2fb307b9e2a0.slice - libcontainer container kubepods-besteffort-pode1abf930_a2f5_4b0a_891f_2fb307b9e2a0.slice. 
Jul 8 10:13:12.384329 systemd[1]: Created slice kubepods-besteffort-podb6f99801_3a5b_4478_9027_9b923ca889d9.slice - libcontainer container kubepods-besteffort-podb6f99801_3a5b_4478_9027_9b923ca889d9.slice. Jul 8 10:13:12.409570 kubelet[2716]: I0708 10:13:12.409448 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6f99801-3a5b-4478-9027-9b923ca889d9-whisker-backend-key-pair\") pod \"whisker-66b8c7b787-wln6d\" (UID: \"b6f99801-3a5b-4478-9027-9b923ca889d9\") " pod="calico-system/whisker-66b8c7b787-wln6d" Jul 8 10:13:12.409570 kubelet[2716]: I0708 10:13:12.409492 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6f99801-3a5b-4478-9027-9b923ca889d9-whisker-ca-bundle\") pod \"whisker-66b8c7b787-wln6d\" (UID: \"b6f99801-3a5b-4478-9027-9b923ca889d9\") " pod="calico-system/whisker-66b8c7b787-wln6d" Jul 8 10:13:12.409570 kubelet[2716]: I0708 10:13:12.409511 2716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgbtb\" (UniqueName: \"kubernetes.io/projected/b6f99801-3a5b-4478-9027-9b923ca889d9-kube-api-access-mgbtb\") pod \"whisker-66b8c7b787-wln6d\" (UID: \"b6f99801-3a5b-4478-9027-9b923ca889d9\") " pod="calico-system/whisker-66b8c7b787-wln6d" Jul 8 10:13:12.478448 systemd-networkd[1451]: cali62c8c6f67b6: Link UP Jul 8 10:13:12.478876 systemd-networkd[1451]: cali62c8c6f67b6: Gained carrier Jul 8 10:13:12.479411 containerd[1553]: time="2025-07-08T10:13:12.479364480Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\" id:\"ae6551daef9b3203effdddaa40e29375da2c5fc66f74f86e8f8359543c81aba8\" pid:4067 exit_status:1 exited_at:{seconds:1751969592 nanos:475308454}" Jul 8 10:13:12.493858 containerd[1553]: 2025-07-08 10:13:12.327 [INFO][4038] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:13:12.493858 containerd[1553]: 2025-07-08 10:13:12.339 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--z998s-eth0 coredns-674b8bbfcf- kube-system de0ffe7a-e2b0-48fd-b874-045162653d78 813 0 2025-07-08 10:12:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-z998s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali62c8c6f67b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-" Jul 8 10:13:12.493858 containerd[1553]: 2025-07-08 10:13:12.339 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.493858 containerd[1553]: 2025-07-08 10:13:12.414 [INFO][4080] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" 
HandleID="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Workload="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.415 [INFO][4080] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" HandleID="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Workload="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001386e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-z998s", "timestamp":"2025-07-08 10:13:12.414300646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.415 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.415 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.416 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.429 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" host="localhost" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.439 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.444 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.447 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.449 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.494111 containerd[1553]: 2025-07-08 10:13:12.449 [INFO][4080] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" host="localhost" Jul 8 10:13:12.494357 containerd[1553]: 2025-07-08 10:13:12.451 [INFO][4080] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb Jul 8 10:13:12.494357 containerd[1553]: 2025-07-08 10:13:12.458 [INFO][4080] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" host="localhost" Jul 8 10:13:12.494357 containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4080] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" host="localhost" Jul 8 10:13:12.494357 containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" host="localhost" Jul 8 10:13:12.494357 
containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:13:12.494357 containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4080] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" HandleID="k8s-pod-network.fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Workload="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.494478 containerd[1553]: 2025-07-08 10:13:12.468 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z998s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de0ffe7a-e2b0-48fd-b874-045162653d78", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-z998s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62c8c6f67b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.494574 containerd[1553]: 2025-07-08 10:13:12.468 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.494574 containerd[1553]: 2025-07-08 10:13:12.468 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62c8c6f67b6 ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.494574 containerd[1553]: 2025-07-08 10:13:12.479 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.494641 containerd[1553]: 2025-07-08 10:13:12.480 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--z998s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"de0ffe7a-e2b0-48fd-b874-045162653d78", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb", Pod:"coredns-674b8bbfcf-z998s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali62c8c6f67b6", MAC:"2a:4d:e4:c1:ce:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.494641 containerd[1553]: 2025-07-08 10:13:12.490 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" Namespace="kube-system" Pod="coredns-674b8bbfcf-z998s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--z998s-eth0" Jul 8 10:13:12.556030 containerd[1553]: time="2025-07-08T10:13:12.555888384Z" level=info msg="connecting to shim fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb" address="unix:///run/containerd/s/4b6219e1c0176007453a441315f57c77a298fb8b501f89c4e6607f8fd7906e90" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:12.580003 systemd-networkd[1451]: cali6368e9dfa87: Link UP Jul 8 10:13:12.580243 systemd-networkd[1451]: cali6368e9dfa87: Gained carrier Jul 8 10:13:12.594236 systemd[1]: Started cri-containerd-fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb.scope - libcontainer container fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb. 
Jul 8 10:13:12.615834 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.265 [INFO][4021] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.282 [INFO][4021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cd6h8-eth0 csi-node-driver- calico-system 9d03d297-85cc-4cfe-b78f-31085a48166e 703 0 2025-07-08 10:12:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cd6h8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6368e9dfa87 [] [] }} ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.282 [INFO][4021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.414 [INFO][4036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" HandleID="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Workload="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.415 [INFO][4036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" HandleID="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Workload="localhost-k8s-csi--node--driver--cd6h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dede0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cd6h8", "timestamp":"2025-07-08 10:13:12.413900855 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.415 [INFO][4036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.463 [INFO][4036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.529 [INFO][4036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.539 [INFO][4036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.545 [INFO][4036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.546 [INFO][4036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.548 [INFO][4036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.548 [INFO][4036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.550 [INFO][4036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504 Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.553 [INFO][4036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.559 [INFO][4036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.559 [INFO][4036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" host="localhost" Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.559 [INFO][4036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 8 10:13:12.631699 containerd[1553]: 2025-07-08 10:13:12.559 [INFO][4036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" HandleID="k8s-pod-network.ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Workload="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.565 [INFO][4021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cd6h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d03d297-85cc-4cfe-b78f-31085a48166e", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cd6h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6368e9dfa87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.565 [INFO][4021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.565 [INFO][4021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6368e9dfa87 ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.578 [INFO][4021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.581 [INFO][4021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cd6h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d03d297-85cc-4cfe-b78f-31085a48166e", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504", Pod:"csi-node-driver-cd6h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6368e9dfa87", MAC:"36:88:0a:a7:20:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.633493 containerd[1553]: 2025-07-08 10:13:12.616 [INFO][4021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" Namespace="calico-system" Pod="csi-node-driver-cd6h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--cd6h8-eth0" Jul 8 10:13:12.672799 containerd[1553]: time="2025-07-08T10:13:12.672737959Z" level=info msg="connecting to shim ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504" address="unix:///run/containerd/s/1301c6df192b6ada6044a212377360cf91c1eea603ca3b4e381abc99a4d75b21" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:12.673726 containerd[1553]: time="2025-07-08T10:13:12.673683695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z998s,Uid:de0ffe7a-e2b0-48fd-b874-045162653d78,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb\"" Jul 8 10:13:12.694715 containerd[1553]: time="2025-07-08T10:13:12.694306730Z" level=info msg="CreateContainer within sandbox \"fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 8 10:13:12.697085 containerd[1553]: time="2025-07-08T10:13:12.694608297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8c7b787-wln6d,Uid:b6f99801-3a5b-4478-9027-9b923ca889d9,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:12.711405 systemd[1]: Started cri-containerd-ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504.scope - libcontainer container ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504. 
Jul 8 10:13:12.731759 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:12.750292 containerd[1553]: time="2025-07-08T10:13:12.750202239Z" level=info msg="Container 1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:12.764375 containerd[1553]: time="2025-07-08T10:13:12.764340487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd6h8,Uid:9d03d297-85cc-4cfe-b78f-31085a48166e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504\"" Jul 8 10:13:12.768491 containerd[1553]: time="2025-07-08T10:13:12.768428463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 8 10:13:12.773086 containerd[1553]: time="2025-07-08T10:13:12.772023765Z" level=info msg="CreateContainer within sandbox \"fa7c009018a54963ee1dd0f31b32ec6d6dc55ff46d8d07a4db018eaa31bf11eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49\"" Jul 8 10:13:12.779652 containerd[1553]: time="2025-07-08T10:13:12.779563062Z" level=info msg="StartContainer for \"1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49\"" Jul 8 10:13:12.786029 containerd[1553]: time="2025-07-08T10:13:12.785928665Z" level=info msg="connecting to shim 1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49" address="unix:///run/containerd/s/4b6219e1c0176007453a441315f57c77a298fb8b501f89c4e6607f8fd7906e90" protocol=ttrpc version=3 Jul 8 10:13:12.808257 systemd[1]: Started cri-containerd-1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49.scope - libcontainer container 1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49. 
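The PullImage, CreateContainer, StartContainer and "connecting to shim ... protocol=ttrpc version=3" entries here are kubelet driving containerd through CRI. The same pull, create, start flow can be reproduced directly against the k8s.io namespace with containerd's Go client; a rough sketch under that assumption, using the containerd 1.x client module layout. The socket path, namespace and image reference come from the log; the container ID and snapshot name are made up.

```go
// Rough sketch only: kubelet talks to containerd over CRI, not this client API,
// but the pull -> create -> start flow mirrors the log entries above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The sandboxes above live in containerd's k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference taken from the PullImage entry in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// "csi-demo" and "csi-demo-snapshot" are hypothetical names for this sketch.
	container, err := client.NewContainer(ctx, "csi-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("csi-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask spawns a shim for the container; Start then runs it, analogous
	// to the StartContainer calls logged above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```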
Jul 8 10:13:12.869818 containerd[1553]: time="2025-07-08T10:13:12.869714465Z" level=info msg="StartContainer for \"1f4ee06bd89aa4cf8099a7e73251cbd963a117e5f0fce7faf0dcaa4e255c2e49\" returns successfully" Jul 8 10:13:12.922284 systemd-networkd[1451]: calicfee78ecaee: Link UP Jul 8 10:13:12.922953 systemd-networkd[1451]: calicfee78ecaee: Gained carrier Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.798 [INFO][4280] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.832 [INFO][4280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66b8c7b787--wln6d-eth0 whisker-66b8c7b787- calico-system b6f99801-3a5b-4478-9027-9b923ca889d9 930 0 2025-07-08 10:13:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66b8c7b787 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66b8c7b787-wln6d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicfee78ecaee [] [] }} ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.832 [INFO][4280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.873 [INFO][4343] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" HandleID="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Workload="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.873 [INFO][4343] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" HandleID="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Workload="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000404e10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66b8c7b787-wln6d", "timestamp":"2025-07-08 10:13:12.873542653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.873 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.873 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.873 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.881 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.886 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.893 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.896 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.898 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.898 [INFO][4343] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.900 [INFO][4343] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6 Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.904 [INFO][4343] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.915 [INFO][4343] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.915 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" host="localhost" Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.915 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 8 10:13:12.935998 containerd[1553]: 2025-07-08 10:13:12.915 [INFO][4343] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" HandleID="k8s-pod-network.30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Workload="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.919 [INFO][4280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66b8c7b787--wln6d-eth0", GenerateName:"whisker-66b8c7b787-", Namespace:"calico-system", SelfLink:"", UID:"b6f99801-3a5b-4478-9027-9b923ca889d9", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66b8c7b787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66b8c7b787-wln6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicfee78ecaee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.919 [INFO][4280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.919 [INFO][4280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfee78ecaee ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.922 [INFO][4280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.923 [INFO][4280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66b8c7b787--wln6d-eth0", GenerateName:"whisker-66b8c7b787-", Namespace:"calico-system", SelfLink:"", UID:"b6f99801-3a5b-4478-9027-9b923ca889d9", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66b8c7b787", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6", Pod:"whisker-66b8c7b787-wln6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicfee78ecaee", MAC:"0a:96:e2:ff:ca:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:12.937436 containerd[1553]: 2025-07-08 10:13:12.931 [INFO][4280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" Namespace="calico-system" Pod="whisker-66b8c7b787-wln6d" WorkloadEndpoint="localhost-k8s-whisker--66b8c7b787--wln6d-eth0" Jul 8 10:13:12.978495 containerd[1553]: time="2025-07-08T10:13:12.978439134Z" level=info msg="connecting to shim 30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6" address="unix:///run/containerd/s/5d4815b0f0a6de913433a95595bd9dd3132e0428962869baa89fb1d0f7de0ddf" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:13.012174 systemd[1]: Started cri-containerd-30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6.scope - libcontainer container 30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6. 
Jul 8 10:13:13.038847 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:13.074452 containerd[1553]: time="2025-07-08T10:13:13.074416274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66b8c7b787-wln6d,Uid:b6f99801-3a5b-4478-9027-9b923ca889d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6\"" Jul 8 10:13:13.118593 containerd[1553]: time="2025-07-08T10:13:13.118550690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-d9q6m,Uid:a7af3032-323d-4592-aa41-5139c600325d,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:13.120474 kubelet[2716]: I0708 10:13:13.120432 2716 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1abf930-a2f5-4b0a-891f-2fb307b9e2a0" path="/var/lib/kubelet/pods/e1abf930-a2f5-4b0a-891f-2fb307b9e2a0/volumes" Jul 8 10:13:13.219310 systemd-networkd[1451]: vxlan.calico: Link UP Jul 8 10:13:13.219320 systemd-networkd[1451]: vxlan.calico: Gained carrier Jul 8 10:13:13.242940 systemd-networkd[1451]: cali4e457551962: Link UP Jul 8 10:13:13.243745 systemd-networkd[1451]: cali4e457551962: Gained carrier Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.157 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0 goldmane-768f4c5c69- calico-system a7af3032-323d-4592-aa41-5139c600325d 817 0 2025-07-08 10:12:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-d9q6m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4e457551962 [] [] }} ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.158 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.190 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" HandleID="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Workload="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.190 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" HandleID="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Workload="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-d9q6m", "timestamp":"2025-07-08 10:13:13.190458363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.190 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.190 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.190 [INFO][4465] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.199 [INFO][4465] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.208 [INFO][4465] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.212 [INFO][4465] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.215 [INFO][4465] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.218 [INFO][4465] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.218 [INFO][4465] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.220 [INFO][4465] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71 Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.226 [INFO][4465] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.236 [INFO][4465] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.237 [INFO][4465] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" host="localhost" Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.237 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 8 10:13:13.260689 containerd[1553]: 2025-07-08 10:13:13.237 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" HandleID="k8s-pod-network.8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Workload="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.240 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"a7af3032-323d-4592-aa41-5139c600325d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-d9q6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e457551962", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.241 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.241 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e457551962 ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.244 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.244 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"a7af3032-323d-4592-aa41-5139c600325d", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71", Pod:"goldmane-768f4c5c69-d9q6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e457551962", MAC:"32:64:78:56:32:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:13.261876 containerd[1553]: 2025-07-08 10:13:13.255 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" Namespace="calico-system" Pod="goldmane-768f4c5c69-d9q6m" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--d9q6m-eth0" Jul 8 10:13:13.287662 containerd[1553]: time="2025-07-08T10:13:13.287541756Z" level=info msg="connecting to shim 8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71" address="unix:///run/containerd/s/0c79a6c23e417109c5592c8f07253c70f9feabfc4061ca01e3e9b81a545e5455" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:13.330027 kubelet[2716]: I0708 10:13:13.329757 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z998s" podStartSLOduration=40.329740588 podStartE2EDuration="40.329740588s" podCreationTimestamp="2025-07-08 10:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:13:13.31479138 +0000 UTC m=+44.291728658" watchObservedRunningTime="2025-07-08 10:13:13.329740588 +0000 UTC m=+44.306677876" Jul 8 10:13:13.332484 systemd[1]: Started cri-containerd-8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71.scope - libcontainer container 8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71. 
Jul 8 10:13:13.351575 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:13.394418 containerd[1553]: time="2025-07-08T10:13:13.394330221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-d9q6m,Uid:a7af3032-323d-4592-aa41-5139c600325d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71\"" Jul 8 10:13:13.481313 systemd-networkd[1451]: cali62c8c6f67b6: Gained IPv6LL Jul 8 10:13:14.117675 containerd[1553]: time="2025-07-08T10:13:14.117607831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmwf4,Uid:b2379b87-3302-4e51-a739-e07127b7a8fb,Namespace:kube-system,Attempt:0,}" Jul 8 10:13:14.118101 containerd[1553]: time="2025-07-08T10:13:14.117690577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-d77nl,Uid:9081cece-19b9-4b34-92d7-7256bea099db,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:13:14.186604 systemd-networkd[1451]: cali6368e9dfa87: Gained IPv6LL Jul 8 10:13:14.230610 systemd-networkd[1451]: cali5c5b03dc077: Link UP Jul 8 10:13:14.230819 systemd-networkd[1451]: cali5c5b03dc077: Gained carrier Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.170 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0 coredns-674b8bbfcf- kube-system b2379b87-3302-4e51-a739-e07127b7a8fb 816 0 2025-07-08 10:12:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rmwf4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c5b03dc077 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.170 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.195 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" HandleID="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Workload="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.196 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" HandleID="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Workload="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rmwf4", "timestamp":"2025-07-08 10:13:14.195841529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.196 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.196 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.196 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.203 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.207 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.210 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.212 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.214 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.214 [INFO][4630] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.215 [INFO][4630] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0 Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.220 [INFO][4630] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4630] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" host="localhost" Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
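The ipam/ipam.go entries above show Calico confirming the host's affine block 192.168.88.128/26 and claiming 192.168.88.133 from it. The following is only a conceptual sketch using the Go standard library (not libcalico-go): it picks the first unclaimed address inside such a block. Calico's real allocator additionally tracks handles, affinities, and reserved attributes in its datastore, none of which is modelled here; the pretend claimed set is an assumption for illustration.

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a block and returns the first address not yet claimed.
// Purely illustrative; Calico's IPAM keeps its claims in the datastore.
func firstFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := map[netip.Addr]bool{}
	// Assume .128-.132 are already taken, as earlier assignments in this log suggest.
	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.88.133")) < 0; a = a.Next() {
		claimed[a] = true
	}
	if next, ok := firstFree(block, claimed); ok {
		fmt.Println("next free:", next) // 192.168.88.133
	}
}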
Jul 8 10:13:14.243176 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" HandleID="k8s-pod-network.3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Workload="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243811 containerd[1553]: 2025-07-08 10:13:14.228 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b2379b87-3302-4e51-a739-e07127b7a8fb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rmwf4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c5b03dc077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:14.243811 containerd[1553]: 2025-07-08 10:13:14.228 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243811 containerd[1553]: 2025-07-08 10:13:14.228 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c5b03dc077 ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243811 containerd[1553]: 2025-07-08 10:13:14.230 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.243811 
containerd[1553]: 2025-07-08 10:13:14.232 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b2379b87-3302-4e51-a739-e07127b7a8fb", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0", Pod:"coredns-674b8bbfcf-rmwf4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c5b03dc077", MAC:"16:9c:25:d3:71:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:14.243811 containerd[1553]: 2025-07-08 10:13:14.239 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" Namespace="kube-system" Pod="coredns-674b8bbfcf-rmwf4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rmwf4-eth0" Jul 8 10:13:14.249223 systemd-networkd[1451]: calicfee78ecaee: Gained IPv6LL Jul 8 10:13:14.268490 containerd[1553]: time="2025-07-08T10:13:14.268372060Z" level=info msg="connecting to shim 3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0" address="unix:///run/containerd/s/57e70b45a3943ccea02c500b00b34c69202890c41c1b152a4aaaf4a8be48dab2" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:14.298251 systemd[1]: Started cri-containerd-3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0.scope - libcontainer container 3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0. 
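In the coredns WorkloadEndpoint dump above, the container ports are printed as Go hex literals (Port:0x35, Port:0x23c1). A one-line check that these are the familiar CoreDNS ports; the values are taken straight from the log:

package main

import "fmt"

func main() {
	// Hex port values from the WorkloadEndpointPort dump above.
	fmt.Println(0x35, 0x23c1) // 53 9153: DNS and the CoreDNS metrics port
}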
Jul 8 10:13:14.315689 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:14.344251 systemd-networkd[1451]: calie26bca4d9c5: Link UP Jul 8 10:13:14.345646 systemd-networkd[1451]: calie26bca4d9c5: Gained carrier Jul 8 10:13:14.350242 containerd[1553]: time="2025-07-08T10:13:14.349842305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmwf4,Uid:b2379b87-3302-4e51-a739-e07127b7a8fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0\"" Jul 8 10:13:14.360686 containerd[1553]: time="2025-07-08T10:13:14.360289919Z" level=info msg="CreateContainer within sandbox \"3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.168 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0 calico-apiserver-77c59df47c- calico-apiserver 9081cece-19b9-4b34-92d7-7256bea099db 819 0 2025-07-08 10:12:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c59df47c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c59df47c-d77nl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie26bca4d9c5 [] [] }} ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.169 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.199 [INFO][4632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" HandleID="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Workload="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.199 [INFO][4632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" HandleID="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Workload="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c59df47c-d77nl", "timestamp":"2025-07-08 10:13:14.199185798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.199 [INFO][4632] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.224 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.304 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.312 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.317 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.319 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.321 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.321 [INFO][4632] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.323 [INFO][4632] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1 Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.328 [INFO][4632] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.335 [INFO][4632] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.335 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" host="localhost" Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.335 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
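Note the serialization visible above: the calico-apiserver IPAM request logs "About to acquire host-wide IPAM lock" at 10:13:14.199 but only acquires it at 10:13:14.224, the moment the coredns request releases it. A minimal sketch of that pattern with sync.Mutex — purely illustrative of the blocking behaviour, not Calico's actual lock implementation, and the sleep durations are made up:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	var wg sync.WaitGroup

	assign := func(pod string, hold time.Duration) {
		defer wg.Done()
		ipamLock.Lock()
		fmt.Println(pod, "acquired lock")
		time.Sleep(hold) // simulate datastore reads/writes done while holding the lock
		ipamLock.Unlock()
		fmt.Println(pod, "released lock")
	}

	wg.Add(2)
	go assign("coredns-674b8bbfcf-rmwf4", 25*time.Millisecond)
	go assign("calico-apiserver-77c59df47c-d77nl", 10*time.Millisecond)
	wg.Wait() // the second request blocks until the first releases, as seen in the log
}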
Jul 8 10:13:14.363691 containerd[1553]: 2025-07-08 10:13:14.335 [INFO][4632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" HandleID="k8s-pod-network.e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Workload="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.342 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0", GenerateName:"calico-apiserver-77c59df47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9081cece-19b9-4b34-92d7-7256bea099db", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c59df47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c59df47c-d77nl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26bca4d9c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.342 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.342 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie26bca4d9c5 ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.345 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.346 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0", GenerateName:"calico-apiserver-77c59df47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"9081cece-19b9-4b34-92d7-7256bea099db", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c59df47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1", Pod:"calico-apiserver-77c59df47c-d77nl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie26bca4d9c5", MAC:"2a:f1:6d:0a:c9:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:14.364236 containerd[1553]: 2025-07-08 10:13:14.359 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-d77nl" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--d77nl-eth0" Jul 8 10:13:14.370591 containerd[1553]: time="2025-07-08T10:13:14.370505559Z" level=info msg="Container bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:14.381112 containerd[1553]: time="2025-07-08T10:13:14.381019778Z" level=info msg="CreateContainer within sandbox \"3111eb0e1ad361c0b374762e241d3e0274636f22c9ad1b1e26df2d357e6136a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52\"" Jul 8 10:13:14.381886 containerd[1553]: time="2025-07-08T10:13:14.381852090Z" level=info msg="StartContainer for \"bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52\"" Jul 8 10:13:14.382647 containerd[1553]: time="2025-07-08T10:13:14.382624431Z" level=info msg="connecting to shim bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52" address="unix:///run/containerd/s/57e70b45a3943ccea02c500b00b34c69202890c41c1b152a4aaaf4a8be48dab2" protocol=ttrpc version=3 Jul 8 10:13:14.399905 containerd[1553]: time="2025-07-08T10:13:14.399429491Z" level=info msg="connecting to shim e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1" address="unix:///run/containerd/s/1cdf374e3102a949ac771de257691176cd5dc4e5f26b0772b3630beb0e68e4f7" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:14.406232 systemd[1]: Started 
cri-containerd-bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52.scope - libcontainer container bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52. Jul 8 10:13:14.427301 systemd[1]: Started cri-containerd-e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1.scope - libcontainer container e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1. Jul 8 10:13:14.441394 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:14.753513 containerd[1553]: time="2025-07-08T10:13:14.753266535Z" level=info msg="StartContainer for \"bd251f7dad8aea3cc107bfd7c9d2fd19843aaea4db3cef04b55584055f612c52\" returns successfully" Jul 8 10:13:14.784897 containerd[1553]: time="2025-07-08T10:13:14.784832468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-d77nl,Uid:9081cece-19b9-4b34-92d7-7256bea099db,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1\"" Jul 8 10:13:14.954020 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Jul 8 10:13:14.991326 containerd[1553]: time="2025-07-08T10:13:14.991260803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:14.992127 containerd[1553]: time="2025-07-08T10:13:14.992096432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 8 10:13:14.993654 containerd[1553]: time="2025-07-08T10:13:14.993591428Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:14.995530 containerd[1553]: time="2025-07-08T10:13:14.995489502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:14.997088 containerd[1553]: time="2025-07-08T10:13:14.997047677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.228053431s" Jul 8 10:13:14.997167 containerd[1553]: time="2025-07-08T10:13:14.997151392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 8 10:13:15.000707 containerd[1553]: time="2025-07-08T10:13:15.000599274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 8 10:13:15.004697 containerd[1553]: time="2025-07-08T10:13:15.004604775Z" level=info msg="CreateContainer within sandbox \"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 8 10:13:15.016892 containerd[1553]: time="2025-07-08T10:13:15.016831627Z" level=info msg="Container 47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:15.017240 systemd-networkd[1451]: cali4e457551962: Gained IPv6LL Jul 8 10:13:15.026349 containerd[1553]: time="2025-07-08T10:13:15.026289763Z" 
level=info msg="CreateContainer within sandbox \"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f\"" Jul 8 10:13:15.026982 containerd[1553]: time="2025-07-08T10:13:15.026934192Z" level=info msg="StartContainer for \"47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f\"" Jul 8 10:13:15.028454 containerd[1553]: time="2025-07-08T10:13:15.028417167Z" level=info msg="connecting to shim 47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f" address="unix:///run/containerd/s/1301c6df192b6ada6044a212377360cf91c1eea603ca3b4e381abc99a4d75b21" protocol=ttrpc version=3 Jul 8 10:13:15.051242 systemd[1]: Started cri-containerd-47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f.scope - libcontainer container 47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f. Jul 8 10:13:15.100302 containerd[1553]: time="2025-07-08T10:13:15.100256890Z" level=info msg="StartContainer for \"47206ae6bc98ae42975ed1319b67cd8fd1c17945bd87c787987b60d09c11cc8f\" returns successfully" Jul 8 10:13:15.331188 kubelet[2716]: I0708 10:13:15.331043 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rmwf4" podStartSLOduration=42.3310278 podStartE2EDuration="42.3310278s" podCreationTimestamp="2025-07-08 10:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:13:15.330597723 +0000 UTC m=+46.307535011" watchObservedRunningTime="2025-07-08 10:13:15.3310278 +0000 UTC m=+46.307965078" Jul 8 10:13:15.849265 systemd-networkd[1451]: cali5c5b03dc077: Gained IPv6LL Jul 8 10:13:16.169286 systemd-networkd[1451]: calie26bca4d9c5: Gained IPv6LL Jul 8 10:13:16.279459 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:36532.service - OpenSSH per-connection server daemon (10.0.0.1:36532). Jul 8 10:13:16.349884 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 36532 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:16.351832 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:16.356522 systemd-logind[1535]: New session 9 of user core. Jul 8 10:13:16.361229 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 8 10:13:16.496487 sshd[4837]: Connection closed by 10.0.0.1 port 36532 Jul 8 10:13:16.496747 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:16.501039 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:36532.service: Deactivated successfully. Jul 8 10:13:16.503087 systemd[1]: session-9.scope: Deactivated successfully. Jul 8 10:13:16.503803 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit. Jul 8 10:13:16.504885 systemd-logind[1535]: Removed session 9. 
Jul 8 10:13:16.952302 containerd[1553]: time="2025-07-08T10:13:16.952246851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:16.953155 containerd[1553]: time="2025-07-08T10:13:16.953122635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 8 10:13:16.954422 containerd[1553]: time="2025-07-08T10:13:16.954374676Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:16.956440 containerd[1553]: time="2025-07-08T10:13:16.956402532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:16.956949 containerd[1553]: time="2025-07-08T10:13:16.956902310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.956276115s" Jul 8 10:13:16.956994 containerd[1553]: time="2025-07-08T10:13:16.956947846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 8 10:13:16.957937 containerd[1553]: time="2025-07-08T10:13:16.957913789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 8 10:13:16.961508 containerd[1553]: time="2025-07-08T10:13:16.961479993Z" level=info msg="CreateContainer within sandbox \"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 8 10:13:16.976421 containerd[1553]: time="2025-07-08T10:13:16.976377608Z" level=info msg="Container 20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:16.983583 containerd[1553]: time="2025-07-08T10:13:16.983535014Z" level=info msg="CreateContainer within sandbox \"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7\"" Jul 8 10:13:16.984054 containerd[1553]: time="2025-07-08T10:13:16.984014264Z" level=info msg="StartContainer for \"20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7\"" Jul 8 10:13:16.985207 containerd[1553]: time="2025-07-08T10:13:16.985183698Z" level=info msg="connecting to shim 20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7" address="unix:///run/containerd/s/5d4815b0f0a6de913433a95595bd9dd3132e0428962869baa89fb1d0f7de0ddf" protocol=ttrpc version=3 Jul 8 10:13:17.004206 systemd[1]: Started cri-containerd-20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7.scope - libcontainer container 20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7. 
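The containerd entries throughout this log are logfmt-style (time=... level=info msg="..."). A small, assumption-laden sketch for pulling the quoted fields out of one such line with the standard library; it handles backslash escapes inside quoted values (as in the StartContainer message above) but is not a full logfmt parser, and unquoted fields such as level=info are deliberately ignored:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	line := `time="2025-07-08T10:13:16.984014264Z" level=info msg="StartContainer for \"20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7\""`
	// Naive key="value" extractor: good enough for eyeballing these entries.
	re := regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)
	for _, m := range re.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%s => %s\n", m[1], m[2])
	}
}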
Jul 8 10:13:17.101137 containerd[1553]: time="2025-07-08T10:13:17.101084777Z" level=info msg="StartContainer for \"20ad07def14dba9230f18fb074b5a93ba001a59b6d28343b98ec115c4304bcc7\" returns successfully" Jul 8 10:13:19.672997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282795942.mount: Deactivated successfully. Jul 8 10:13:20.164255 containerd[1553]: time="2025-07-08T10:13:20.164196776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:20.165023 containerd[1553]: time="2025-07-08T10:13:20.164990335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 8 10:13:20.166373 containerd[1553]: time="2025-07-08T10:13:20.166350889Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:20.168540 containerd[1553]: time="2025-07-08T10:13:20.168484001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:20.169344 containerd[1553]: time="2025-07-08T10:13:20.169289342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.211349254s" Jul 8 10:13:20.169344 containerd[1553]: time="2025-07-08T10:13:20.169316805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 8 10:13:20.170959 containerd[1553]: time="2025-07-08T10:13:20.170928830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 8 10:13:20.177030 containerd[1553]: time="2025-07-08T10:13:20.176996216Z" level=info msg="CreateContainer within sandbox \"8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 8 10:13:20.187005 containerd[1553]: time="2025-07-08T10:13:20.186826634Z" level=info msg="Container bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:20.195480 containerd[1553]: time="2025-07-08T10:13:20.195443485Z" level=info msg="CreateContainer within sandbox \"8ffd254998d0e4d7ba5d6959e7e15993b437d29bbe44b31b19ffc100cc048f71\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\"" Jul 8 10:13:20.196283 containerd[1553]: time="2025-07-08T10:13:20.196208691Z" level=info msg="StartContainer for \"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\"" Jul 8 10:13:20.197587 containerd[1553]: time="2025-07-08T10:13:20.197562411Z" level=info msg="connecting to shim bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8" address="unix:///run/containerd/s/0c79a6c23e417109c5592c8f07253c70f9feabfc4061ca01e3e9b81a545e5455" protocol=ttrpc version=3 Jul 8 10:13:20.224295 systemd[1]: Started cri-containerd-bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8.scope - 
libcontainer container bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8. Jul 8 10:13:20.391972 containerd[1553]: time="2025-07-08T10:13:20.391852546Z" level=info msg="StartContainer for \"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\" returns successfully" Jul 8 10:13:21.474626 containerd[1553]: time="2025-07-08T10:13:21.474558225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\" id:\"11405e6e07368e586df8330a4ac26c2fe9d74ceeddbaf62b28ce618b5c266621\" pid:4960 exit_status:1 exited_at:{seconds:1751969601 nanos:474144439}" Jul 8 10:13:21.509845 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:52402.service - OpenSSH per-connection server daemon (10.0.0.1:52402). Jul 8 10:13:21.554917 kubelet[2716]: I0708 10:13:21.554834 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-d9q6m" podStartSLOduration=27.780521049 podStartE2EDuration="34.554817545s" podCreationTimestamp="2025-07-08 10:12:47 +0000 UTC" firstStartedPulling="2025-07-08 10:13:13.396541864 +0000 UTC m=+44.373479152" lastFinishedPulling="2025-07-08 10:13:20.17083836 +0000 UTC m=+51.147775648" observedRunningTime="2025-07-08 10:13:21.553846864 +0000 UTC m=+52.530784152" watchObservedRunningTime="2025-07-08 10:13:21.554817545 +0000 UTC m=+52.531754823" Jul 8 10:13:21.614204 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 52402 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:21.615673 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:21.619907 systemd-logind[1535]: New session 10 of user core. Jul 8 10:13:21.626214 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 8 10:13:21.753944 sshd[4978]: Connection closed by 10.0.0.1 port 52402 Jul 8 10:13:21.754320 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:21.759300 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:52402.service: Deactivated successfully. Jul 8 10:13:21.761343 systemd[1]: session-10.scope: Deactivated successfully. Jul 8 10:13:21.762053 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit. Jul 8 10:13:21.763187 systemd-logind[1535]: Removed session 10. 
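For goldmane-768f4c5c69-d9q6m the kubelet entry above reports podStartE2EDuration="34.554817545s" but podStartSLOduration=27.780521049; the difference is exactly the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check of that arithmetic with the timestamps from the entry, dropping the "m=+…" monotonic suffixes before parsing:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-08 10:12:47 +0000 UTC")
	observed := parse("2025-07-08 10:13:21.554817545 +0000 UTC")
	pullStart := parse("2025-07-08 10:13:13.396541864 +0000 UTC")
	pullEnd := parse("2025-07-08 10:13:20.17083836 +0000 UTC")

	e2e := observed.Sub(created)
	pull := pullEnd.Sub(pullStart)
	fmt.Println("E2E:", e2e)             // 34.554817545s
	fmt.Println("pull window:", pull)    // 6.774296496s
	fmt.Println("E2E - pull:", e2e-pull) // 27.780521049s, matching podStartSLOduration
}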
Jul 8 10:13:22.117875 containerd[1553]: time="2025-07-08T10:13:22.117830253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,}" Jul 8 10:13:22.225026 systemd-networkd[1451]: calib3d45e0c16b: Link UP Jul 8 10:13:22.225559 systemd-networkd[1451]: calib3d45e0c16b: Gained carrier Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.172 [INFO][4991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0 calico-kube-controllers-746475f85f- calico-system bd14101c-66fe-456d-94fb-8c66c533c1b5 821 0 2025-07-08 10:12:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:746475f85f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-746475f85f-hkq8h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib3d45e0c16b [] [] }} ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.173 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.193 [INFO][5006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" HandleID="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Workload="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.193 [INFO][5006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" HandleID="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Workload="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000515dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-746475f85f-hkq8h", "timestamp":"2025-07-08 10:13:22.193533918 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.193 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.193 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.193 [INFO][5006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.200 [INFO][5006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.204 [INFO][5006] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.208 [INFO][5006] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.209 [INFO][5006] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.211 [INFO][5006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.211 [INFO][5006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.212 [INFO][5006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20 Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.215 [INFO][5006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.220 [INFO][5006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.220 [INFO][5006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" host="localhost" Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.220 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 8 10:13:22.241434 containerd[1553]: 2025-07-08 10:13:22.220 [INFO][5006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" HandleID="k8s-pod-network.485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Workload="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.222 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0", GenerateName:"calico-kube-controllers-746475f85f-", Namespace:"calico-system", SelfLink:"", UID:"bd14101c-66fe-456d-94fb-8c66c533c1b5", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746475f85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-746475f85f-hkq8h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3d45e0c16b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.223 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.223 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3d45e0c16b ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.225 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.225 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0", GenerateName:"calico-kube-controllers-746475f85f-", Namespace:"calico-system", SelfLink:"", UID:"bd14101c-66fe-456d-94fb-8c66c533c1b5", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"746475f85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20", Pod:"calico-kube-controllers-746475f85f-hkq8h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib3d45e0c16b", MAC:"4e:e6:0e:0c:61:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:22.242002 containerd[1553]: 2025-07-08 10:13:22.236 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" Namespace="calico-system" Pod="calico-kube-controllers-746475f85f-hkq8h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--746475f85f--hkq8h-eth0" Jul 8 10:13:22.268101 containerd[1553]: time="2025-07-08T10:13:22.268036878Z" level=info msg="connecting to shim 485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20" address="unix:///run/containerd/s/38c63695a8a35641d34960c943e79c8a021320e577908957626793c8545ebb8c" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:22.301213 systemd[1]: Started cri-containerd-485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20.scope - libcontainer container 485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20. 
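The endpoint dump above shows the CNI plugin attaching MAC "4e:e6:0e:0c:61:34" to the calib3d45e0c16b endpoint. A trivial standard-library check that the string is a valid hardware address and that its locally-administered bit (0x02 in the first octet) is set, which is the case for every endpoint MAC in this log, as expected for generated veth addresses; this is only an observation about the logged values, not a claim about how Calico generates them:

package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC recorded for the calico-kube-controllers endpoint above.
	hw, err := net.ParseMAC("4e:e6:0e:0c:61:34")
	if err != nil {
		panic(err)
	}
	fmt.Println(hw, "locally administered:", hw[0]&0x02 != 0) // true
}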
Jul 8 10:13:22.316351 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:22.732983 containerd[1553]: time="2025-07-08T10:13:22.732935497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\" id:\"de676417c6ee87270ca30626c477a51451bd4602a10d96de1aeedb13d289bb2f\" pid:5084 exit_status:1 exited_at:{seconds:1751969602 nanos:732555074}" Jul 8 10:13:23.029728 containerd[1553]: time="2025-07-08T10:13:23.028941636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-746475f85f-hkq8h,Uid:bd14101c-66fe-456d-94fb-8c66c533c1b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20\"" Jul 8 10:13:23.624614 containerd[1553]: time="2025-07-08T10:13:23.624543378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:23.625328 containerd[1553]: time="2025-07-08T10:13:23.625278968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 8 10:13:23.626523 containerd[1553]: time="2025-07-08T10:13:23.626487846Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:23.628604 containerd[1553]: time="2025-07-08T10:13:23.628557820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:23.629167 containerd[1553]: time="2025-07-08T10:13:23.629130375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.458169866s" Jul 8 10:13:23.629213 containerd[1553]: time="2025-07-08T10:13:23.629166232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 8 10:13:23.630470 containerd[1553]: time="2025-07-08T10:13:23.630327921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 8 10:13:23.634839 containerd[1553]: time="2025-07-08T10:13:23.634812857Z" level=info msg="CreateContainer within sandbox \"e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 8 10:13:23.643160 containerd[1553]: time="2025-07-08T10:13:23.643118041Z" level=info msg="Container cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:23.651822 containerd[1553]: time="2025-07-08T10:13:23.651773081Z" level=info msg="CreateContainer within sandbox \"e163d961ba3472b3bfa38786ee50184922b83d2576099c287dfff9f1d18a9ac1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725\"" Jul 8 10:13:23.653104 containerd[1553]: 
time="2025-07-08T10:13:23.652365061Z" level=info msg="StartContainer for \"cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725\"" Jul 8 10:13:23.653321 containerd[1553]: time="2025-07-08T10:13:23.653282623Z" level=info msg="connecting to shim cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725" address="unix:///run/containerd/s/1cdf374e3102a949ac771de257691176cd5dc4e5f26b0772b3630beb0e68e4f7" protocol=ttrpc version=3 Jul 8 10:13:23.679260 systemd[1]: Started cri-containerd-cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725.scope - libcontainer container cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725. Jul 8 10:13:23.727907 containerd[1553]: time="2025-07-08T10:13:23.727868512Z" level=info msg="StartContainer for \"cf79a6129a1be346fee7d2f3eb1270234d062c45c85e9a8b5bf30a636b989725\" returns successfully" Jul 8 10:13:24.042677 systemd-networkd[1451]: calib3d45e0c16b: Gained IPv6LL Jul 8 10:13:24.415294 kubelet[2716]: I0708 10:13:24.415128 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c59df47c-d77nl" podStartSLOduration=32.571032027 podStartE2EDuration="41.415111181s" podCreationTimestamp="2025-07-08 10:12:43 +0000 UTC" firstStartedPulling="2025-07-08 10:13:14.786000932 +0000 UTC m=+45.762938220" lastFinishedPulling="2025-07-08 10:13:23.630080086 +0000 UTC m=+54.607017374" observedRunningTime="2025-07-08 10:13:24.414436635 +0000 UTC m=+55.391373923" watchObservedRunningTime="2025-07-08 10:13:24.415111181 +0000 UTC m=+55.392048469" Jul 8 10:13:25.118448 containerd[1553]: time="2025-07-08T10:13:25.118392880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:13:25.224929 systemd-networkd[1451]: cali4e9444c50e6: Link UP Jul 8 10:13:25.226574 systemd-networkd[1451]: cali4e9444c50e6: Gained carrier Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.160 [INFO][5151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0 calico-apiserver-77c59df47c- calico-apiserver 3c4eb006-3f30-452e-b844-82020e73618d 820 0 2025-07-08 10:12:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c59df47c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c59df47c-p96dh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e9444c50e6 [] [] }} ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.160 [INFO][5151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.186 [INFO][5164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" HandleID="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Workload="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.186 [INFO][5164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" HandleID="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Workload="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c59df47c-p96dh", "timestamp":"2025-07-08 10:13:25.185995476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.186 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.186 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.186 [INFO][5164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.193 [INFO][5164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.200 [INFO][5164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.204 [INFO][5164] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.206 [INFO][5164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.208 [INFO][5164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.208 [INFO][5164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.209 [INFO][5164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.212 [INFO][5164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.218 [INFO][5164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.218 [INFO][5164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] 
handle="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" host="localhost" Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.218 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:13:25.238986 containerd[1553]: 2025-07-08 10:13:25.219 [INFO][5164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" HandleID="k8s-pod-network.005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Workload="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.222 [INFO][5151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0", GenerateName:"calico-apiserver-77c59df47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c4eb006-3f30-452e-b844-82020e73618d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c59df47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c59df47c-p96dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9444c50e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.222 [INFO][5151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.222 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9444c50e6 ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.224 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.225 [INFO][5151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0", GenerateName:"calico-apiserver-77c59df47c-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c4eb006-3f30-452e-b844-82020e73618d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 12, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c59df47c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c", Pod:"calico-apiserver-77c59df47c-p96dh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e9444c50e6", MAC:"9a:ad:08:d8:c5:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:13:25.239719 containerd[1553]: 2025-07-08 10:13:25.235 [INFO][5151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" Namespace="calico-apiserver" Pod="calico-apiserver-77c59df47c-p96dh" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c59df47c--p96dh-eth0" Jul 8 10:13:25.260929 containerd[1553]: time="2025-07-08T10:13:25.260881866Z" level=info msg="connecting to shim 005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c" address="unix:///run/containerd/s/0392eaa8e01beed052ed05247102662f59583a4e9ae4b0fcebf71e8795937d5a" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:13:25.298233 systemd[1]: Started cri-containerd-005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c.scope - libcontainer container 005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c. 
Jul 8 10:13:25.312338 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:13:25.378352 containerd[1553]: time="2025-07-08T10:13:25.378217021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c59df47c-p96dh,Uid:3c4eb006-3f30-452e-b844-82020e73618d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c\"" Jul 8 10:13:25.387719 containerd[1553]: time="2025-07-08T10:13:25.387674815Z" level=info msg="CreateContainer within sandbox \"005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 8 10:13:25.397707 containerd[1553]: time="2025-07-08T10:13:25.397647295Z" level=info msg="Container 504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:25.408438 containerd[1553]: time="2025-07-08T10:13:25.408378970Z" level=info msg="CreateContainer within sandbox \"005057d5f90198b1c6374409c4e5d370573dd7d2c1615c7c2584dd7610b1169c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b\"" Jul 8 10:13:25.409133 containerd[1553]: time="2025-07-08T10:13:25.409058846Z" level=info msg="StartContainer for \"504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b\"" Jul 8 10:13:25.410392 containerd[1553]: time="2025-07-08T10:13:25.410364705Z" level=info msg="connecting to shim 504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b" address="unix:///run/containerd/s/0392eaa8e01beed052ed05247102662f59583a4e9ae4b0fcebf71e8795937d5a" protocol=ttrpc version=3 Jul 8 10:13:25.438193 systemd[1]: Started cri-containerd-504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b.scope - libcontainer container 504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b. 
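The "connecting to shim ... protocol=ttrpc version=3" entries above show the runtime reaching each container's shim over a unix socket under /run/containerd/s/. The sketch below covers only the transport step, checking that such a socket exists and accepts a plain connection; it does not implement ttrpc, and the socket path is copied from the log purely as an example and will differ on any other host.

    // shimsock_sketch.go - checks that a containerd shim socket (as reported in
    // the "connecting to shim ... address=unix://..." entries above) exists and
    // accepts a unix-socket connection. It does not speak ttrpc.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"strings"
    	"time"
    )

    func main() {
    	// Address copied from the log above; assumed only as an example path.
    	addr := "unix:///run/containerd/s/0392eaa8e01beed052ed05247102662f59583a4e9ae4b0fcebf71e8795937d5a"
    	path := strings.TrimPrefix(addr, "unix://")

    	if _, err := os.Stat(path); err != nil {
    		fmt.Fprintln(os.Stderr, "socket not present:", err)
    		os.Exit(1)
    	}

    	conn, err := net.DialTimeout("unix", path, 2*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "dial failed:", err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Println("shim socket reachable:", path)
    }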
Jul 8 10:13:25.540171 containerd[1553]: time="2025-07-08T10:13:25.540129801Z" level=info msg="StartContainer for \"504acb99c4941ea07c989e8e0b5a885c9299859bed890a95d4cfd2af8e46799b\" returns successfully" Jul 8 10:13:25.863636 containerd[1553]: time="2025-07-08T10:13:25.863563405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:25.864335 containerd[1553]: time="2025-07-08T10:13:25.864297902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 8 10:13:25.865813 containerd[1553]: time="2025-07-08T10:13:25.865784151Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:25.868767 containerd[1553]: time="2025-07-08T10:13:25.868700904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:25.869772 containerd[1553]: time="2025-07-08T10:13:25.869714245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.239354052s" Jul 8 10:13:25.869814 containerd[1553]: time="2025-07-08T10:13:25.869780118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 8 10:13:25.871391 containerd[1553]: time="2025-07-08T10:13:25.871310329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 8 10:13:25.877027 containerd[1553]: time="2025-07-08T10:13:25.876960210Z" level=info msg="CreateContainer within sandbox \"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 8 10:13:25.887750 containerd[1553]: time="2025-07-08T10:13:25.887700239Z" level=info msg="Container 9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:25.898025 containerd[1553]: time="2025-07-08T10:13:25.897975278Z" level=info msg="CreateContainer within sandbox \"ec74be7c3c586d416037d12bd3e6cc8210b17ca42119da94fb2b98daadc19504\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7\"" Jul 8 10:13:25.898660 containerd[1553]: time="2025-07-08T10:13:25.898625297Z" level=info msg="StartContainer for \"9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7\"" Jul 8 10:13:25.900214 containerd[1553]: time="2025-07-08T10:13:25.900171889Z" level=info msg="connecting to shim 9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7" address="unix:///run/containerd/s/1301c6df192b6ada6044a212377360cf91c1eea603ca3b4e381abc99a4d75b21" protocol=ttrpc version=3 Jul 8 10:13:25.924309 systemd[1]: Started 
cri-containerd-9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7.scope - libcontainer container 9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7. Jul 8 10:13:25.983087 containerd[1553]: time="2025-07-08T10:13:25.983030668Z" level=info msg="StartContainer for \"9035234263e3fed615e3bb351b6af573413fdb4ef75de3a8523731e36d4e43e7\" returns successfully" Jul 8 10:13:26.182418 kubelet[2716]: I0708 10:13:26.182295 2716 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 8 10:13:26.183656 kubelet[2716]: I0708 10:13:26.183628 2716 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 8 10:13:26.432321 kubelet[2716]: I0708 10:13:26.432037 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c59df47c-p96dh" podStartSLOduration=43.432015518 podStartE2EDuration="43.432015518s" podCreationTimestamp="2025-07-08 10:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:13:26.423437996 +0000 UTC m=+57.400375294" watchObservedRunningTime="2025-07-08 10:13:26.432015518 +0000 UTC m=+57.408952806" Jul 8 10:13:26.432941 kubelet[2716]: I0708 10:13:26.432830 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cd6h8" podStartSLOduration=25.329444876 podStartE2EDuration="38.432822793s" podCreationTimestamp="2025-07-08 10:12:48 +0000 UTC" firstStartedPulling="2025-07-08 10:13:12.767473491 +0000 UTC m=+43.744410769" lastFinishedPulling="2025-07-08 10:13:25.870851398 +0000 UTC m=+56.847788686" observedRunningTime="2025-07-08 10:13:26.432354234 +0000 UTC m=+57.409291522" watchObservedRunningTime="2025-07-08 10:13:26.432822793 +0000 UTC m=+57.409760081" Jul 8 10:13:26.767931 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:52418.service - OpenSSH per-connection server daemon (10.0.0.1:52418). Jul 8 10:13:26.793263 systemd-networkd[1451]: cali4e9444c50e6: Gained IPv6LL Jul 8 10:13:26.835346 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 52418 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:26.836851 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:26.841336 systemd-logind[1535]: New session 11 of user core. Jul 8 10:13:26.851203 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 8 10:13:26.978620 sshd[5311]: Connection closed by 10.0.0.1 port 52418 Jul 8 10:13:26.979051 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:26.990927 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:52418.service: Deactivated successfully. Jul 8 10:13:26.992964 systemd[1]: session-11.scope: Deactivated successfully. Jul 8 10:13:26.993787 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit. Jul 8 10:13:26.996587 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424). Jul 8 10:13:26.997633 systemd-logind[1535]: Removed session 11. 
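Each SSH session above follows the same journal pattern: "Accepted publickey", "New session N of user core", then "Connection closed", "Removed session N". Pairing the New/Removed entries by session number gives the session duration. The sketch below is a hypothetical helper that does exactly that over journal text; the two sample entries are session 11's lines copied verbatim from above, and only the journalctl short timestamp format visible in this log is assumed.

    // ssh_sessions_sketch.go - pairs "New session N" / "Removed session N"
    // journal entries (like the session 11 lines above) and prints how long
    // each SSH session lasted.
    package main

    import (
    	"fmt"
    	"regexp"
    	"time"
    )

    const stamp = "Jan 2 15:04:05.000000" // journal short format, no year

    var (
    	reNew     = regexp.MustCompile(`(\w{3} +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)`)
    	reRemoved = regexp.MustCompile(`(\w{3} +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: Removed session (\d+)`)
    )

    func main() {
    	// Two entries copied verbatim from the log above (session 11).
    	journal := `Jul 8 10:13:26.841336 systemd-logind[1535]: New session 11 of user core.
    Jul 8 10:13:26.997633 systemd-logind[1535]: Removed session 11.`

    	opened := map[string]time.Time{}
    	for _, m := range reNew.FindAllStringSubmatch(journal, -1) {
    		if t, err := time.Parse(stamp, m[1]); err == nil {
    			opened[m[2]] = t
    		}
    	}
    	for _, m := range reRemoved.FindAllStringSubmatch(journal, -1) {
    		end, err := time.Parse(stamp, m[1])
    		start, ok := opened[m[2]]
    		if err == nil && ok {
    			fmt.Printf("session %s lasted %v\n", m[2], end.Sub(start)) // session 11 lasted ~156ms
    		}
    	}
    }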
Jul 8 10:13:27.044598 sshd[5326]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:27.046096 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:27.050523 systemd-logind[1535]: New session 12 of user core. Jul 8 10:13:27.065212 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 8 10:13:27.678653 sshd[5329]: Connection closed by 10.0.0.1 port 52424 Jul 8 10:13:27.691224 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Jul 8 10:13:27.700635 sshd-session[5326]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:27.705632 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:52424.service: Deactivated successfully. Jul 8 10:13:27.708005 systemd[1]: session-12.scope: Deactivated successfully. Jul 8 10:13:27.709472 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit. Jul 8 10:13:27.710691 systemd-logind[1535]: Removed session 12. Jul 8 10:13:27.746728 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:27.748004 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:27.753343 systemd-logind[1535]: New session 13 of user core. Jul 8 10:13:27.764227 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 8 10:13:27.904882 sshd[5344]: Connection closed by 10.0.0.1 port 52426 Jul 8 10:13:27.905284 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:27.910428 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:52426.service: Deactivated successfully. Jul 8 10:13:27.912537 systemd[1]: session-13.scope: Deactivated successfully. Jul 8 10:13:27.913641 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit. Jul 8 10:13:27.914836 systemd-logind[1535]: Removed session 13. Jul 8 10:13:29.505328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110292133.mount: Deactivated successfully. 
Jul 8 10:13:29.945246 containerd[1553]: time="2025-07-08T10:13:29.945177878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:29.945945 containerd[1553]: time="2025-07-08T10:13:29.945908910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 8 10:13:29.947193 containerd[1553]: time="2025-07-08T10:13:29.947140199Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:29.949115 containerd[1553]: time="2025-07-08T10:13:29.949057246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:29.949697 containerd[1553]: time="2025-07-08T10:13:29.949643164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.078304301s" Jul 8 10:13:29.949697 containerd[1553]: time="2025-07-08T10:13:29.949685614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 8 10:13:29.950923 containerd[1553]: time="2025-07-08T10:13:29.950672285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 8 10:13:29.954329 containerd[1553]: time="2025-07-08T10:13:29.954273731Z" level=info msg="CreateContainer within sandbox \"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 8 10:13:29.961982 containerd[1553]: time="2025-07-08T10:13:29.961952996Z" level=info msg="Container b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:29.979779 containerd[1553]: time="2025-07-08T10:13:29.979727120Z" level=info msg="CreateContainer within sandbox \"30d37e7bcc2e4f3df54f8154c52aa756963b2ed676f73c91c5c3985f9488ffc6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98\"" Jul 8 10:13:29.980312 containerd[1553]: time="2025-07-08T10:13:29.980284234Z" level=info msg="StartContainer for \"b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98\"" Jul 8 10:13:29.981269 containerd[1553]: time="2025-07-08T10:13:29.981232934Z" level=info msg="connecting to shim b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98" address="unix:///run/containerd/s/5d4815b0f0a6de913433a95595bd9dd3132e0428962869baa89fb1d0f7de0ddf" protocol=ttrpc version=3 Jul 8 10:13:30.006219 systemd[1]: Started cri-containerd-b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98.scope - libcontainer container b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98. 
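Each image pull above ends with a "stop pulling image ... bytes read=N" entry and a "Pulled image ... in D" entry; dividing the two gives the effective transfer rate. The sketch below does that arithmetic for the whisker-backend pull recorded just above (33083477 bytes in 4.078304301s, roughly 8 MB/s). The numbers are copied from the log; the helper itself is only an illustration.

    // pull_rate_sketch.go - computes the effective transfer rate of an image
    // pull from the two figures containerd logs: "bytes read" and the pull
    // duration. Values below are the whisker-backend pull from the log above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const bytesRead = 33083477                   // from "stop pulling image ... bytes read=33083477"
    	d, err := time.ParseDuration("4.078304301s") // from "... in 4.078304301s"
    	if err != nil {
    		panic(err)
    	}
    	rate := float64(bytesRead) / d.Seconds()
    	fmt.Printf("pulled %d bytes in %v (%.1f MiB/s)\n", bytesRead, d, rate/(1<<20))
    }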
Jul 8 10:13:30.056053 containerd[1553]: time="2025-07-08T10:13:30.056008855Z" level=info msg="StartContainer for \"b114e281f127acaaf175b750e87dc93f9f27897a92ea363e3ad922f1b216bc98\" returns successfully" Jul 8 10:13:30.437589 kubelet[2716]: I0708 10:13:30.437050 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66b8c7b787-wln6d" podStartSLOduration=1.5623838110000001 podStartE2EDuration="18.437031112s" podCreationTimestamp="2025-07-08 10:13:12 +0000 UTC" firstStartedPulling="2025-07-08 10:13:13.075847261 +0000 UTC m=+44.052784549" lastFinishedPulling="2025-07-08 10:13:29.950494562 +0000 UTC m=+60.927431850" observedRunningTime="2025-07-08 10:13:30.436591727 +0000 UTC m=+61.413529005" watchObservedRunningTime="2025-07-08 10:13:30.437031112 +0000 UTC m=+61.413968400" Jul 8 10:13:32.923134 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Jul 8 10:13:33.002102 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:33.004347 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:33.009386 systemd-logind[1535]: New session 14 of user core. Jul 8 10:13:33.016203 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 8 10:13:33.178383 sshd[5416]: Connection closed by 10.0.0.1 port 37254 Jul 8 10:13:33.179060 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:33.184236 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:37254.service: Deactivated successfully. Jul 8 10:13:33.186949 systemd[1]: session-14.scope: Deactivated successfully. Jul 8 10:13:33.188505 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit. Jul 8 10:13:33.189906 systemd-logind[1535]: Removed session 14. 
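The pod_startup_latency_tracker entries above each print a podStartE2EDuration (watch-observed running time minus pod creation time) and a smaller podStartSLOduration which, in these entries, equals the e2e duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below re-derives both figures for the whisker-66b8c7b787-wln6d entry above using only the timestamps as printed; it is a check on the logged numbers, not kubelet code.

    // slo_arithmetic_sketch.go - re-derives the podStartE2EDuration /
    // podStartSLOduration figures printed by kubelet's pod_startup_latency_tracker
    // for the whisker-66b8c7b787-wln6d entry above. Timestamps are copied from
    // the log; the subtraction is the only logic here.
    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-07-08 10:13:12 +0000 UTC")           // podCreationTimestamp
    	firstPull := mustParse("2025-07-08 10:13:13.075847261 +0000 UTC") // firstStartedPulling
    	lastPull := mustParse("2025-07-08 10:13:29.950494562 +0000 UTC")  // lastFinishedPulling
    	running := mustParse("2025-07-08 10:13:30.437031112 +0000 UTC")   // watchObservedRunningTime

    	e2e := running.Sub(created)          // 18.437031112s, matches podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // 1.562383811s, matches podStartSLOduration
    	fmt.Println("e2e:", e2e, "slo (e2e minus pull window):", slo)
    }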
Jul 8 10:13:33.361366 containerd[1553]: time="2025-07-08T10:13:33.361175024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:33.362640 containerd[1553]: time="2025-07-08T10:13:33.362437523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 8 10:13:33.363909 containerd[1553]: time="2025-07-08T10:13:33.363875392Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:33.366417 containerd[1553]: time="2025-07-08T10:13:33.366362266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:13:33.366918 containerd[1553]: time="2025-07-08T10:13:33.366880508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.416144792s" Jul 8 10:13:33.366968 containerd[1553]: time="2025-07-08T10:13:33.366925444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 8 10:13:33.381590 containerd[1553]: time="2025-07-08T10:13:33.381528621Z" level=info msg="CreateContainer within sandbox \"485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 8 10:13:33.392599 containerd[1553]: time="2025-07-08T10:13:33.391716455Z" level=info msg="Container cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:13:33.402456 containerd[1553]: time="2025-07-08T10:13:33.402399797Z" level=info msg="CreateContainer within sandbox \"485f32857c103c1b2d42c1aa76a5c48fabb7bd9139c9a0479345bd392f201a20\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\"" Jul 8 10:13:33.403309 containerd[1553]: time="2025-07-08T10:13:33.403280519Z" level=info msg="StartContainer for \"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\"" Jul 8 10:13:33.404516 containerd[1553]: time="2025-07-08T10:13:33.404490727Z" level=info msg="connecting to shim cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60" address="unix:///run/containerd/s/38c63695a8a35641d34960c943e79c8a021320e577908957626793c8545ebb8c" protocol=ttrpc version=3 Jul 8 10:13:33.429220 systemd[1]: Started cri-containerd-cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60.scope - libcontainer container cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60. 
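The containerd entries throughout this log share one shape: a quoted RFC3339 time, a level, and a quoted msg. The sketch below pulls those three fields out of one such entry, the kube-controllers "stop pulling image" line copied from above; the regular expression is a hypothetical parser written against that observed layout, not anything containerd ships.

    // containerd_logline_sketch.go - extracts the time/level/msg fields from one
    // containerd log entry of the form seen throughout this log.
    package main

    import (
    	"fmt"
    	"regexp"
    	"time"
    )

    var re = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="([^"]+)"`)

    func main() {
    	// Sample entry copied from the log above.
    	line := `time="2025-07-08T10:13:33.362437523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"`

    	m := re.FindStringSubmatch(line)
    	if m == nil {
    		panic("line did not match")
    	}
    	ts, err := time.Parse(time.RFC3339Nano, m[1])
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("at %s, level=%s: %s\n", ts.Format(time.RFC3339Nano), m[2], m[3])
    }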
Jul 8 10:13:33.474731 containerd[1553]: time="2025-07-08T10:13:33.474689413Z" level=info msg="StartContainer for \"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\" returns successfully" Jul 8 10:13:34.452813 kubelet[2716]: I0708 10:13:34.451882 2716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-746475f85f-hkq8h" podStartSLOduration=36.116173793 podStartE2EDuration="46.451864983s" podCreationTimestamp="2025-07-08 10:12:48 +0000 UTC" firstStartedPulling="2025-07-08 10:13:23.031988843 +0000 UTC m=+54.008926121" lastFinishedPulling="2025-07-08 10:13:33.367680033 +0000 UTC m=+64.344617311" observedRunningTime="2025-07-08 10:13:34.4515833 +0000 UTC m=+65.428520588" watchObservedRunningTime="2025-07-08 10:13:34.451864983 +0000 UTC m=+65.428802261" Jul 8 10:13:34.483244 containerd[1553]: time="2025-07-08T10:13:34.483202584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\" id:\"45e02ab4bbde8c3f7c4db618fa4816dd6f5f08487eff0ac30679057cac61f6ee\" pid:5496 exited_at:{seconds:1751969614 nanos:482903807}" Jul 8 10:13:38.192034 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:57536.service - OpenSSH per-connection server daemon (10.0.0.1:57536). Jul 8 10:13:38.243970 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 57536 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:38.245354 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:38.249380 systemd-logind[1535]: New session 15 of user core. Jul 8 10:13:38.260191 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 8 10:13:38.413160 sshd[5513]: Connection closed by 10.0.0.1 port 57536 Jul 8 10:13:38.413523 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:38.417318 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit. Jul 8 10:13:38.417705 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:57536.service: Deactivated successfully. Jul 8 10:13:38.419892 systemd[1]: session-15.scope: Deactivated successfully. Jul 8 10:13:38.421537 systemd-logind[1535]: Removed session 15. Jul 8 10:13:42.377375 containerd[1553]: time="2025-07-08T10:13:42.377311907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c19c544da539a8d44cdb17e1da27e28b8bf60b6fded8ebbf5a6bd14d9b5c518a\" id:\"4ac170b72634371bc62c31f49c84abbba4db080406bd89e1b26d39236c8816cf\" pid:5538 exited_at:{seconds:1751969622 nanos:376939942}" Jul 8 10:13:43.425711 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:57544.service - OpenSSH per-connection server daemon (10.0.0.1:57544). Jul 8 10:13:43.483671 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 57544 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:43.485780 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:43.490568 systemd-logind[1535]: New session 16 of user core. Jul 8 10:13:43.504245 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 8 10:13:43.636206 sshd[5557]: Connection closed by 10.0.0.1 port 57544 Jul 8 10:13:43.636580 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:43.641332 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:57544.service: Deactivated successfully. Jul 8 10:13:43.643497 systemd[1]: session-16.scope: Deactivated successfully. 
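The TaskExit events above carry the exit time as a protobuf-style {seconds, nanos} pair rather than a formatted timestamp. Converting the pair from the cc9c7692c307f0... event just above (seconds:1751969614 nanos:482903807) back to wall-clock time gives 2025-07-08 10:13:34.482903807 UTC, which lines up with the journal timestamp on the same entry. A one-line conversion:

    // exited_at_sketch.go - converts the {seconds, nanos} pair carried by the
    // TaskExit event above back into a wall-clock timestamp.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at values from the TaskExit event for container cc9c7692c307f0...
    	exitedAt := time.Unix(1751969614, 482903807).UTC()
    	fmt.Println(exitedAt) // 2025-07-08 10:13:34.482903807 +0000 UTC
    }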
Jul 8 10:13:43.644473 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit. Jul 8 10:13:43.645652 systemd-logind[1535]: Removed session 16. Jul 8 10:13:48.654293 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:42382.service - OpenSSH per-connection server daemon (10.0.0.1:42382). Jul 8 10:13:48.748858 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 42382 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:48.756334 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:48.762414 systemd-logind[1535]: New session 17 of user core. Jul 8 10:13:48.766192 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 8 10:13:48.986834 sshd[5575]: Connection closed by 10.0.0.1 port 42382 Jul 8 10:13:48.987613 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:48.996281 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:42382.service: Deactivated successfully. Jul 8 10:13:48.998849 systemd[1]: session-17.scope: Deactivated successfully. Jul 8 10:13:48.999702 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit. Jul 8 10:13:49.002958 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:42384.service - OpenSSH per-connection server daemon (10.0.0.1:42384). Jul 8 10:13:49.003605 systemd-logind[1535]: Removed session 17. Jul 8 10:13:49.063399 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 42384 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:49.065692 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:49.071743 systemd-logind[1535]: New session 18 of user core. Jul 8 10:13:49.083248 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 8 10:13:49.375563 sshd[5592]: Connection closed by 10.0.0.1 port 42384 Jul 8 10:13:49.376010 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:49.387208 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:42384.service: Deactivated successfully. Jul 8 10:13:49.389185 systemd[1]: session-18.scope: Deactivated successfully. Jul 8 10:13:49.389973 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit. Jul 8 10:13:49.393014 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:42396.service - OpenSSH per-connection server daemon (10.0.0.1:42396). Jul 8 10:13:49.394331 systemd-logind[1535]: Removed session 18. Jul 8 10:13:49.454751 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 42396 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:49.456369 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:49.461397 systemd-logind[1535]: New session 19 of user core. Jul 8 10:13:49.470251 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 8 10:13:50.049754 containerd[1553]: time="2025-07-08T10:13:50.049712559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\" id:\"d7c7aef20fddb24ef1ab02ca1d2aafdac6a5e6db0a75e7ef09797ed5aa2c14fa\" pid:5627 exited_at:{seconds:1751969630 nanos:49393298}" Jul 8 10:13:50.379448 sshd[5609]: Connection closed by 10.0.0.1 port 42396 Jul 8 10:13:50.380861 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:50.390013 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:42396.service: Deactivated successfully. 
Jul 8 10:13:50.392323 systemd[1]: session-19.scope: Deactivated successfully. Jul 8 10:13:50.394051 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit. Jul 8 10:13:50.397305 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:42398.service - OpenSSH per-connection server daemon (10.0.0.1:42398). Jul 8 10:13:50.398469 systemd-logind[1535]: Removed session 19. Jul 8 10:13:50.455083 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 42398 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:50.456344 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:50.460818 systemd-logind[1535]: New session 20 of user core. Jul 8 10:13:50.470213 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 8 10:13:51.014149 sshd[5654]: Connection closed by 10.0.0.1 port 42398 Jul 8 10:13:51.014626 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:51.025685 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:42398.service: Deactivated successfully. Jul 8 10:13:51.027663 systemd[1]: session-20.scope: Deactivated successfully. Jul 8 10:13:51.028374 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit. Jul 8 10:13:51.031248 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:42414.service - OpenSSH per-connection server daemon (10.0.0.1:42414). Jul 8 10:13:51.032103 systemd-logind[1535]: Removed session 20. Jul 8 10:13:51.093015 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 42414 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:51.094840 sshd-session[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:51.099605 systemd-logind[1535]: New session 21 of user core. Jul 8 10:13:51.109239 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 8 10:13:51.217657 sshd[5668]: Connection closed by 10.0.0.1 port 42414 Jul 8 10:13:51.217994 sshd-session[5665]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:51.221762 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:42414.service: Deactivated successfully. Jul 8 10:13:51.223717 systemd[1]: session-21.scope: Deactivated successfully. Jul 8 10:13:51.224533 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit. Jul 8 10:13:51.225576 systemd-logind[1535]: Removed session 21. Jul 8 10:13:52.479492 containerd[1553]: time="2025-07-08T10:13:52.479431458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd1a985dbb05db5e43e74083f50191ab55b1d509d0ff78c0b66d9e0ac8c4b7d8\" id:\"95e253c86e572f2c32198e30dbdac758f39dd7498f4c34ad48ed6d95b86548d0\" pid:5692 exited_at:{seconds:1751969632 nanos:479197882}" Jul 8 10:13:56.230916 systemd[1]: Started sshd@21-10.0.0.44:22-10.0.0.1:42418.service - OpenSSH per-connection server daemon (10.0.0.1:42418). Jul 8 10:13:56.285173 sshd[5716]: Accepted publickey for core from 10.0.0.1 port 42418 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:13:56.286367 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:13:56.290657 systemd-logind[1535]: New session 22 of user core. Jul 8 10:13:56.298185 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 8 10:13:56.416636 sshd[5719]: Connection closed by 10.0.0.1 port 42418 Jul 8 10:13:56.416931 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Jul 8 10:13:56.420825 systemd[1]: sshd@21-10.0.0.44:22-10.0.0.1:42418.service: Deactivated successfully. Jul 8 10:13:56.422659 systemd[1]: session-22.scope: Deactivated successfully. Jul 8 10:13:56.423420 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit. Jul 8 10:13:56.424561 systemd-logind[1535]: Removed session 22. Jul 8 10:13:56.837667 containerd[1553]: time="2025-07-08T10:13:56.837619639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\" id:\"1f6e9155291dacf5ef991caa2bc24964a01b681c5612f4340e66515b38f9889c\" pid:5744 exited_at:{seconds:1751969636 nanos:837392135}" Jul 8 10:14:01.434723 systemd[1]: Started sshd@22-10.0.0.44:22-10.0.0.1:50900.service - OpenSSH per-connection server daemon (10.0.0.1:50900). Jul 8 10:14:01.507569 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 50900 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:14:01.509285 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:14:01.513725 systemd-logind[1535]: New session 23 of user core. Jul 8 10:14:01.525326 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 8 10:14:01.672354 sshd[5758]: Connection closed by 10.0.0.1 port 50900 Jul 8 10:14:01.672847 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Jul 8 10:14:01.678162 systemd[1]: sshd@22-10.0.0.44:22-10.0.0.1:50900.service: Deactivated successfully. Jul 8 10:14:01.680334 systemd[1]: session-23.scope: Deactivated successfully. Jul 8 10:14:01.681044 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit. Jul 8 10:14:01.682156 systemd-logind[1535]: Removed session 23. Jul 8 10:14:04.488602 containerd[1553]: time="2025-07-08T10:14:04.488549150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc9c7692c307f06ffdf29a749b8875b88875013660ddd94f8e4a1ccf21e40a60\" id:\"dba437c158601aae0203d5921c5a11136fec5a62611a696da857ff7717e8ad44\" pid:5785 exited_at:{seconds:1751969644 nanos:488276562}" Jul 8 10:14:06.686948 systemd[1]: Started sshd@23-10.0.0.44:22-10.0.0.1:50910.service - OpenSSH per-connection server daemon (10.0.0.1:50910). Jul 8 10:14:06.754698 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 50910 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:14:06.757225 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:14:06.762183 systemd-logind[1535]: New session 24 of user core. Jul 8 10:14:06.776185 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 8 10:14:06.988797 sshd[5802]: Connection closed by 10.0.0.1 port 50910 Jul 8 10:14:06.989085 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Jul 8 10:14:06.993800 systemd[1]: sshd@23-10.0.0.44:22-10.0.0.1:50910.service: Deactivated successfully. Jul 8 10:14:06.996040 systemd[1]: session-24.scope: Deactivated successfully. Jul 8 10:14:06.996826 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit. Jul 8 10:14:06.997991 systemd-logind[1535]: Removed session 24.