Jul 8 10:08:22.823927 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 8 08:29:03 -00 2025
Jul 8 10:08:22.823954 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e
Jul 8 10:08:22.823968 kernel: BIOS-provided physical RAM map:
Jul 8 10:08:22.823977 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 8 10:08:22.823983 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 8 10:08:22.823989 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 8 10:08:22.823997 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 8 10:08:22.824004 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 8 10:08:22.824016 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 8 10:08:22.824023 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 8 10:08:22.824030 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 8 10:08:22.824036 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 8 10:08:22.824042 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 8 10:08:22.824049 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 8 10:08:22.824059 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 8 10:08:22.824067 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 8 10:08:22.824084 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 8 10:08:22.824091 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 8 10:08:22.824101 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 8 10:08:22.824109 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 8 10:08:22.824116 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 8 10:08:22.824123 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 8 10:08:22.824130 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 8 10:08:22.824137 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 8 10:08:22.824143 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 8 10:08:22.824163 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 8 10:08:22.824178 kernel: NX (Execute Disable) protection: active
Jul 8 10:08:22.824193 kernel: APIC: Static calls initialized
Jul 8 10:08:22.824211 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 8 10:08:22.824218 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 8 10:08:22.824225 kernel: extended physical RAM map:
Jul 8 10:08:22.824252 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 8 10:08:22.824259 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 8 10:08:22.824266 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 8 10:08:22.824274 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 8 10:08:22.824285 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 8 10:08:22.824303 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 8 10:08:22.824317 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 8 10:08:22.824333 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 8 10:08:22.824346 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 8 10:08:22.824358 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 8 10:08:22.824366 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 8 10:08:22.824377 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 8 10:08:22.824385 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 8 10:08:22.824392 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 8 10:08:22.824399 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 8 10:08:22.824407 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 8 10:08:22.824414 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 8 10:08:22.824421 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 8 10:08:22.824428 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 8 10:08:22.824436 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 8 10:08:22.824443 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 8 10:08:22.824452 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 8 10:08:22.824459 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 8 10:08:22.824467 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 8 10:08:22.824474 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 8 10:08:22.824481 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 8 10:08:22.824488 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 8 10:08:22.824499 kernel: efi: EFI v2.7 by EDK II
Jul 8 10:08:22.824506 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 8 10:08:22.824514 kernel: random: crng init done
Jul 8 10:08:22.824523 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 8 10:08:22.824531 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 8 10:08:22.824542 kernel: secureboot: Secure boot disabled
Jul 8 10:08:22.824550 kernel: SMBIOS 2.8 present.
Jul 8 10:08:22.824557 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 8 10:08:22.824564 kernel: DMI: Memory slots populated: 1/1
Jul 8 10:08:22.824571 kernel: Hypervisor detected: KVM
Jul 8 10:08:22.824579 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 8 10:08:22.824586 kernel: kvm-clock: using sched offset of 3996748560 cycles
Jul 8 10:08:22.824594 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 8 10:08:22.824602 kernel: tsc: Detected 2794.750 MHz processor
Jul 8 10:08:22.824609 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 8 10:08:22.824617 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 8 10:08:22.824627 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 8 10:08:22.824635 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 8 10:08:22.824642 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 8 10:08:22.824650 kernel: Using GB pages for direct mapping
Jul 8 10:08:22.824657 kernel: ACPI: Early table checksum verification disabled
Jul 8 10:08:22.824665 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 8 10:08:22.824672 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 8 10:08:22.824680 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824687 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824697 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 8 10:08:22.824705 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824712 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824720 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824727 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 8 10:08:22.824735 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 8 10:08:22.824742 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 8 10:08:22.824749 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 8 10:08:22.824759 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 8 10:08:22.824767 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 8 10:08:22.824774 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 8 10:08:22.824781 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 8 10:08:22.824789 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 8 10:08:22.824796 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 8 10:08:22.824803 kernel: No NUMA configuration found
Jul 8 10:08:22.824811 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 8 10:08:22.824818 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 8 10:08:22.824825 kernel: Zone ranges:
Jul 8 10:08:22.824835 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 8 10:08:22.824843 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 8 10:08:22.824850 kernel: Normal empty
Jul 8 10:08:22.824857 kernel: Device empty
Jul 8 10:08:22.824873 kernel: Movable zone start for each node
Jul 8 10:08:22.824880 kernel: Early memory node ranges
Jul 8 10:08:22.824888 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 8 10:08:22.824895 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 8 10:08:22.824905 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 8 10:08:22.824916 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 8 10:08:22.824923 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 8 10:08:22.824931 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 8 10:08:22.824938 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 8 10:08:22.824945 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 8 10:08:22.824952 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 8 10:08:22.824960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 8 10:08:22.824970 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 8 10:08:22.824987 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 8 10:08:22.824995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 8 10:08:22.825003 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 8 10:08:22.825011 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 8 10:08:22.825022 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 8 10:08:22.825031 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 8 10:08:22.825039 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 8 10:08:22.825049 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 8 10:08:22.825057 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 8 10:08:22.825067 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 8 10:08:22.825075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 8 10:08:22.825092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 8 10:08:22.825100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 8 10:08:22.825108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 8 10:08:22.825124 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 8 10:08:22.825132 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 8 10:08:22.825140 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 8 10:08:22.825157 kernel: TSC deadline timer available
Jul 8 10:08:22.825177 kernel: CPU topo: Max. logical packages: 1
Jul 8 10:08:22.825199 kernel: CPU topo: Max. logical dies: 1
Jul 8 10:08:22.825212 kernel: CPU topo: Max. dies per package: 1
Jul 8 10:08:22.825220 kernel: CPU topo: Max. threads per core: 1
Jul 8 10:08:22.825251 kernel: CPU topo: Num. cores per package: 4
Jul 8 10:08:22.825260 kernel: CPU topo: Num. threads per package: 4
Jul 8 10:08:22.825268 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 8 10:08:22.825283 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 8 10:08:22.825294 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 8 10:08:22.825303 kernel: kvm-guest: setup PV sched yield
Jul 8 10:08:22.825318 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 8 10:08:22.825326 kernel: Booting paravirtualized kernel on KVM
Jul 8 10:08:22.825334 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 8 10:08:22.825342 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 8 10:08:22.825350 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 8 10:08:22.825358 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 8 10:08:22.825366 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 8 10:08:22.825373 kernel: kvm-guest: PV spinlocks enabled
Jul 8 10:08:22.825384 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 8 10:08:22.825393 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e
Jul 8 10:08:22.825405 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 8 10:08:22.825413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 8 10:08:22.825421 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 8 10:08:22.825428 kernel: Fallback order for Node 0: 0
Jul 8 10:08:22.825436 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 8 10:08:22.825444 kernel: Policy zone: DMA32
Jul 8 10:08:22.825452 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 8 10:08:22.825462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 8 10:08:22.825470 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 8 10:08:22.825477 kernel: ftrace: allocated 157 pages with 5 groups
Jul 8 10:08:22.825485 kernel: Dynamic Preempt: voluntary
Jul 8 10:08:22.825493 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 8 10:08:22.825502 kernel: rcu: RCU event tracing is enabled.
Jul 8 10:08:22.825510 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 8 10:08:22.825517 kernel: Trampoline variant of Tasks RCU enabled.
Jul 8 10:08:22.825525 kernel: Rude variant of Tasks RCU enabled.
Jul 8 10:08:22.825535 kernel: Tracing variant of Tasks RCU enabled.
Jul 8 10:08:22.825543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 8 10:08:22.825554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 8 10:08:22.825562 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:08:22.825570 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:08:22.825578 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 8 10:08:22.825586 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 8 10:08:22.825594 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 8 10:08:22.825601 kernel: Console: colour dummy device 80x25
Jul 8 10:08:22.825612 kernel: printk: legacy console [ttyS0] enabled
Jul 8 10:08:22.825619 kernel: ACPI: Core revision 20240827
Jul 8 10:08:22.825627 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 8 10:08:22.825635 kernel: APIC: Switch to symmetric I/O mode setup
Jul 8 10:08:22.825643 kernel: x2apic enabled
Jul 8 10:08:22.825650 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 8 10:08:22.825658 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 8 10:08:22.825666 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 8 10:08:22.825674 kernel: kvm-guest: setup PV IPIs
Jul 8 10:08:22.825684 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 8 10:08:22.825692 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 8 10:08:22.825700 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 8 10:08:22.825707 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 8 10:08:22.825716 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 8 10:08:22.825723 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 8 10:08:22.825731 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 8 10:08:22.825739 kernel: Spectre V2 : Mitigation: Retpolines
Jul 8 10:08:22.825750 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 8 10:08:22.825760 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 8 10:08:22.825768 kernel: RETBleed: Mitigation: untrained return thunk
Jul 8 10:08:22.825776 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 8 10:08:22.825786 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 8 10:08:22.825794 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 8 10:08:22.825803 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 8 10:08:22.825811 kernel: x86/bugs: return thunk changed
Jul 8 10:08:22.825819 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 8 10:08:22.825829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 8 10:08:22.825837 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 8 10:08:22.825845 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 8 10:08:22.825853 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 8 10:08:22.825861 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 8 10:08:22.825868 kernel: Freeing SMP alternatives memory: 32K
Jul 8 10:08:22.825876 kernel: pid_max: default: 32768 minimum: 301
Jul 8 10:08:22.825884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 8 10:08:22.825891 kernel: landlock: Up and running.
Jul 8 10:08:22.825901 kernel: SELinux: Initializing.
Jul 8 10:08:22.825909 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 10:08:22.825917 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 8 10:08:22.825925 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 8 10:08:22.825933 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 8 10:08:22.825940 kernel: ... version: 0
Jul 8 10:08:22.825948 kernel: ... bit width: 48
Jul 8 10:08:22.825956 kernel: ... generic registers: 6
Jul 8 10:08:22.825964 kernel: ... value mask: 0000ffffffffffff
Jul 8 10:08:22.825974 kernel: ... max period: 00007fffffffffff
Jul 8 10:08:22.825981 kernel: ... fixed-purpose events: 0
Jul 8 10:08:22.825989 kernel: ... event mask: 000000000000003f
Jul 8 10:08:22.825997 kernel: signal: max sigframe size: 1776
Jul 8 10:08:22.826004 kernel: rcu: Hierarchical SRCU implementation.
Jul 8 10:08:22.826012 kernel: rcu: Max phase no-delay instances is 400.
Jul 8 10:08:22.826023 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 8 10:08:22.826031 kernel: smp: Bringing up secondary CPUs ...
Jul 8 10:08:22.826038 kernel: smpboot: x86: Booting SMP configuration:
Jul 8 10:08:22.826048 kernel: .... node #0, CPUs: #1 #2 #3
Jul 8 10:08:22.826056 kernel: smp: Brought up 1 node, 4 CPUs
Jul 8 10:08:22.826064 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 8 10:08:22.826072 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54592K init, 2376K bss, 137196K reserved, 0K cma-reserved)
Jul 8 10:08:22.826079 kernel: devtmpfs: initialized
Jul 8 10:08:22.826087 kernel: x86/mm: Memory block size: 128MB
Jul 8 10:08:22.826096 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 8 10:08:22.826103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 8 10:08:22.826111 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 8 10:08:22.826121 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 8 10:08:22.826129 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 8 10:08:22.826137 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 8 10:08:22.826145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 8 10:08:22.826153 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 8 10:08:22.826160 kernel: pinctrl core: initialized pinctrl subsystem
Jul 8 10:08:22.826168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 8 10:08:22.826176 kernel: audit: initializing netlink subsys (disabled)
Jul 8 10:08:22.826184 kernel: audit: type=2000 audit(1751969301.016:1): state=initialized audit_enabled=0 res=1
Jul 8 10:08:22.826193 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 8 10:08:22.826201 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 8 10:08:22.826209 kernel: cpuidle: using governor menu
Jul 8 10:08:22.826217 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 8 10:08:22.826224 kernel: dca service started, version 1.12.1
Jul 8 10:08:22.826250 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 8 10:08:22.826258 kernel: PCI: Using configuration type 1 for base access
Jul 8 10:08:22.826266 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 8 10:08:22.826274 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 8 10:08:22.826285 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 8 10:08:22.826293 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 8 10:08:22.826301 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 8 10:08:22.826309 kernel: ACPI: Added _OSI(Module Device)
Jul 8 10:08:22.826316 kernel: ACPI: Added _OSI(Processor Device)
Jul 8 10:08:22.826324 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 8 10:08:22.826332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 8 10:08:22.826339 kernel: ACPI: Interpreter enabled
Jul 8 10:08:22.826347 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 8 10:08:22.826357 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 8 10:08:22.826365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 8 10:08:22.826373 kernel: PCI: Using E820 reservations for host bridge windows
Jul 8 10:08:22.826380 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 8 10:08:22.826388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 8 10:08:22.826599 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 8 10:08:22.826721 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 8 10:08:22.826842 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 8 10:08:22.826852 kernel: PCI host bridge to bus 0000:00
Jul 8 10:08:22.827017 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 8 10:08:22.827130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 8 10:08:22.827272 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 8 10:08:22.827398 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 8 10:08:22.827508 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 8 10:08:22.827612 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 8 10:08:22.827722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 8 10:08:22.827872 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 8 10:08:22.828006 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 8 10:08:22.828128 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 8 10:08:22.828266 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 8 10:08:22.828398 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 8 10:08:22.828529 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 8 10:08:22.828665 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 8 10:08:22.828783 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 8 10:08:22.828899 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 8 10:08:22.829029 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 8 10:08:22.829172 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 8 10:08:22.829315 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 8 10:08:22.829439 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 8 10:08:22.829581 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 8 10:08:22.829741 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 8 10:08:22.829859 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 8 10:08:22.829980 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 8 10:08:22.830094 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 8 10:08:22.830224 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 8 10:08:22.830389 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 8 10:08:22.830505 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 8 10:08:22.830638 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 8 10:08:22.830754 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 8 10:08:22.830867 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 8 10:08:22.831031 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 8 10:08:22.831159 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 8 10:08:22.831170 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 8 10:08:22.831178 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 8 10:08:22.831187 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 8 10:08:22.831195 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 8 10:08:22.831203 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 8 10:08:22.831216 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 8 10:08:22.831227 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 8 10:08:22.831270 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 8 10:08:22.831280 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 8 10:08:22.831288 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 8 10:08:22.831296 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 8 10:08:22.831304 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 8 10:08:22.831312 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 8 10:08:22.831320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 8 10:08:22.831328 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 8 10:08:22.831336 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 8 10:08:22.831347 kernel: iommu: Default domain type: Translated
Jul 8 10:08:22.831355 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 8 10:08:22.831363 kernel: efivars: Registered efivars operations
Jul 8 10:08:22.831371 kernel: PCI: Using ACPI for IRQ routing
Jul 8 10:08:22.831379 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 8 10:08:22.831388 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 8 10:08:22.831396 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 8 10:08:22.831404 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 8 10:08:22.831412 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 8 10:08:22.831422 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 8 10:08:22.831439 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 8 10:08:22.831455 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 8 10:08:22.831462 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 8 10:08:22.831602 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 8 10:08:22.831749 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 8 10:08:22.831866 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 8 10:08:22.831877 kernel: vgaarb: loaded
Jul 8 10:08:22.831893 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 8 10:08:22.831902 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 8 10:08:22.831910 kernel: clocksource: Switched to clocksource kvm-clock
Jul 8 10:08:22.831918 kernel: VFS: Disk quotas dquot_6.6.0
Jul 8 10:08:22.831926 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 8 10:08:22.831935 kernel: pnp: PnP ACPI init
Jul 8 10:08:22.832096 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 8 10:08:22.832126 kernel: pnp: PnP ACPI: found 6 devices
Jul 8 10:08:22.832138 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 8 10:08:22.832147 kernel: NET: Registered PF_INET protocol family
Jul 8 10:08:22.832155 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 8 10:08:22.832164 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 8 10:08:22.832172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 8 10:08:22.832181 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 8 10:08:22.832189 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 8 10:08:22.832197 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 8 10:08:22.832206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 10:08:22.832216 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 8 10:08:22.832225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 8 10:08:22.832256 kernel: NET: Registered PF_XDP protocol family
Jul 8 10:08:22.832381 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 8 10:08:22.832499 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 8 10:08:22.832609 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 8 10:08:22.832735 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 8 10:08:22.832853 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 8 10:08:22.832964 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 8 10:08:22.833073 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 8 10:08:22.833179 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 8 10:08:22.833190 kernel: PCI: CLS 0 bytes, default 64
Jul 8 10:08:22.833199 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 8 10:08:22.833207 kernel: Initialise system trusted keyrings
Jul 8 10:08:22.833216 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 8 10:08:22.833224 kernel: Key type asymmetric registered
Jul 8 10:08:22.833297 kernel: Asymmetric key parser 'x509' registered
Jul 8 10:08:22.833309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 8 10:08:22.833318 kernel: io scheduler mq-deadline registered
Jul 8 10:08:22.833329 kernel: io scheduler kyber registered
Jul 8 10:08:22.833337 kernel: io scheduler bfq registered
Jul 8 10:08:22.833346 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 8 10:08:22.833357 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 8 10:08:22.833365 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 8 10:08:22.833374 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 8 10:08:22.833382 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 8 10:08:22.833391 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 8 10:08:22.833400 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 8 10:08:22.833408 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 8 10:08:22.833422 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 8 10:08:22.833572 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 8 10:08:22.833698 kernel: rtc_cmos 00:04: registered as rtc0
Jul 8 10:08:22.833819 kernel: rtc_cmos 00:04: setting system clock to 2025-07-08T10:08:22 UTC (1751969302)
Jul 8 10:08:22.833929 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 8 10:08:22.833940 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jul 8 10:08:22.833949 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 8 10:08:22.833957 kernel: efifb: probing for efifb
Jul 8 10:08:22.833966 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 8 10:08:22.833974 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 8 10:08:22.833986 kernel: efifb: scrolling: redraw
Jul 8 10:08:22.833994 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 8 10:08:22.834003 kernel: Console: switching to colour frame buffer device 160x50
Jul 8 10:08:22.834011 kernel: fb0: EFI VGA frame buffer device
Jul 8 10:08:22.834020 kernel: pstore: Using crash dump compression: deflate
Jul 8 10:08:22.834029 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 8 10:08:22.834040 kernel: NET: Registered PF_INET6 protocol family
Jul 8 10:08:22.834049 kernel: Segment Routing with IPv6
Jul 8 10:08:22.834059 kernel: In-situ OAM (IOAM) with IPv6
Jul 8 10:08:22.834069 kernel: NET: Registered PF_PACKET protocol family
Jul 8 10:08:22.834078 kernel: Key type dns_resolver registered
Jul 8 10:08:22.834086 kernel: IPI shorthand broadcast: enabled
Jul 8 10:08:22.834094 kernel: sched_clock: Marking stable (3257003301, 150883146)->(3428230812, -20344365)
Jul 8 10:08:22.834102 kernel: registered taskstats version 1
Jul 8 10:08:22.834110 kernel: Loading compiled-in X.509 certificates
Jul 8 10:08:22.834119 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 979ef2c0f02e8e58776916c0ada334818b3eaefe'
Jul 8 10:08:22.834128 kernel: Demotion targets for Node 0: null
Jul 8 10:08:22.834136 kernel: Key type .fscrypt registered
Jul 8 10:08:22.834146 kernel: Key type fscrypt-provisioning registered
Jul 8 10:08:22.834155 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 8 10:08:22.834163 kernel: ima: Allocated hash algorithm: sha1 Jul 8 10:08:22.834171 kernel: ima: No architecture policies found Jul 8 10:08:22.834180 kernel: clk: Disabling unused clocks Jul 8 10:08:22.834188 kernel: Warning: unable to open an initial console. Jul 8 10:08:22.834196 kernel: Freeing unused kernel image (initmem) memory: 54592K Jul 8 10:08:22.834205 kernel: Write protecting the kernel read-only data: 24576k Jul 8 10:08:22.834213 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 8 10:08:22.834224 kernel: Run /init as init process Jul 8 10:08:22.834254 kernel: with arguments: Jul 8 10:08:22.834263 kernel: /init Jul 8 10:08:22.834272 kernel: with environment: Jul 8 10:08:22.834281 kernel: HOME=/ Jul 8 10:08:22.834289 kernel: TERM=linux Jul 8 10:08:22.834297 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 8 10:08:22.834306 systemd[1]: Successfully made /usr/ read-only. Jul 8 10:08:22.834318 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 8 10:08:22.834330 systemd[1]: Detected virtualization kvm. Jul 8 10:08:22.834339 systemd[1]: Detected architecture x86-64. Jul 8 10:08:22.834348 systemd[1]: Running in initrd. Jul 8 10:08:22.834357 systemd[1]: No hostname configured, using default hostname. Jul 8 10:08:22.834366 systemd[1]: Hostname set to . Jul 8 10:08:22.834374 systemd[1]: Initializing machine ID from VM UUID. Jul 8 10:08:22.834383 systemd[1]: Queued start job for default target initrd.target. Jul 8 10:08:22.834395 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 8 10:08:22.834404 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 8 10:08:22.834413 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 8 10:08:22.834422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 8 10:08:22.834431 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 8 10:08:22.834441 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 8 10:08:22.834451 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 8 10:08:22.834462 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 8 10:08:22.834471 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 8 10:08:22.834480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 8 10:08:22.834489 systemd[1]: Reached target paths.target - Path Units. Jul 8 10:08:22.834498 systemd[1]: Reached target slices.target - Slice Units. Jul 8 10:08:22.834509 systemd[1]: Reached target swap.target - Swaps. Jul 8 10:08:22.834518 systemd[1]: Reached target timers.target - Timer Units. Jul 8 10:08:22.834527 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 8 10:08:22.834538 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 8 10:08:22.834546 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 8 10:08:22.834555 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 8 10:08:22.834564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 8 10:08:22.834574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 8 10:08:22.834583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 8 10:08:22.834592 systemd[1]: Reached target sockets.target - Socket Units. Jul 8 10:08:22.834601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 8 10:08:22.834610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 8 10:08:22.834620 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 8 10:08:22.834630 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 8 10:08:22.834639 systemd[1]: Starting systemd-fsck-usr.service... Jul 8 10:08:22.834648 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 8 10:08:22.834656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 8 10:08:22.834665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 8 10:08:22.834674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 8 10:08:22.834686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 8 10:08:22.834695 systemd[1]: Finished systemd-fsck-usr.service. Jul 8 10:08:22.834707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 8 10:08:22.834742 systemd-journald[219]: Collecting audit messages is disabled. Jul 8 10:08:22.834767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 8 10:08:22.834776 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 8 10:08:22.834787 systemd-journald[219]: Journal started Jul 8 10:08:22.834808 systemd-journald[219]: Runtime Journal (/run/log/journal/65ab2deedf634ec8b469651c7176c362) is 6M, max 48.5M, 42.4M free. 
Jul 8 10:08:22.823046 systemd-modules-load[221]: Inserted module 'overlay' Jul 8 10:08:22.840293 systemd[1]: Started systemd-journald.service - Journal Service. Jul 8 10:08:22.840708 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 8 10:08:22.850102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 8 10:08:22.851421 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 8 10:08:22.851537 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 8 10:08:22.855284 kernel: Bridge firewalling registered Jul 8 10:08:22.855152 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 8 10:08:22.856160 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 8 10:08:22.859863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 8 10:08:22.867117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 8 10:08:22.870385 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 8 10:08:22.872634 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 8 10:08:22.875374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 8 10:08:22.876516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 8 10:08:22.878632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 8 10:08:22.881507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 8 10:08:22.905189 dracut-cmdline[258]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e Jul 8 10:08:22.924223 systemd-resolved[261]: Positive Trust Anchors: Jul 8 10:08:22.924296 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 8 10:08:22.924326 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 8 10:08:22.926764 systemd-resolved[261]: Defaulting to hostname 'linux'. Jul 8 10:08:22.928117 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 8 10:08:22.934792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 8 10:08:23.023304 kernel: SCSI subsystem initialized Jul 8 10:08:23.033288 kernel: Loading iSCSI transport class v2.0-870. Jul 8 10:08:23.044281 kernel: iscsi: registered transport (tcp) Jul 8 10:08:23.065317 kernel: iscsi: registered transport (qla4xxx) Jul 8 10:08:23.065392 kernel: QLogic iSCSI HBA Driver Jul 8 10:08:23.087517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 8 10:08:23.112356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 8 10:08:23.114587 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 8 10:08:23.170774 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 8 10:08:23.173446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 8 10:08:23.234272 kernel: raid6: avx2x4 gen() 29690 MB/s Jul 8 10:08:23.251285 kernel: raid6: avx2x2 gen() 30826 MB/s Jul 8 10:08:23.268287 kernel: raid6: avx2x1 gen() 25557 MB/s Jul 8 10:08:23.268328 kernel: raid6: using algorithm avx2x2 gen() 30826 MB/s Jul 8 10:08:23.286293 kernel: raid6: .... xor() 19847 MB/s, rmw enabled Jul 8 10:08:23.286317 kernel: raid6: using avx2x2 recovery algorithm Jul 8 10:08:23.306267 kernel: xor: automatically using best checksumming function avx Jul 8 10:08:23.474270 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 8 10:08:23.483015 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 8 10:08:23.485835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 8 10:08:23.513577 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 8 10:08:23.519200 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 8 10:08:23.520135 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 8 10:08:23.544411 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Jul 8 10:08:23.574006 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 8 10:08:23.577486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 8 10:08:23.655847 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 8 10:08:23.662341 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 8 10:08:23.698270 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 8 10:08:23.703480 kernel: cryptd: max_cpu_qlen set to 1000 Jul 8 10:08:23.712258 kernel: AES CTR mode by8 optimization enabled Jul 8 10:08:23.716280 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 8 10:08:23.725302 kernel: libata version 3.00 loaded. Jul 8 10:08:23.728282 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 8 10:08:23.733088 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 8 10:08:23.733131 kernel: GPT:9289727 != 19775487 Jul 8 10:08:23.733152 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 8 10:08:23.733164 kernel: GPT:9289727 != 19775487 Jul 8 10:08:23.733173 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 8 10:08:23.733187 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 8 10:08:23.742654 kernel: ahci 0000:00:1f.2: version 3.0 Jul 8 10:08:23.745024 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 8 10:08:23.745041 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 8 10:08:23.745178 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 8 10:08:23.745337 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 8 10:08:23.745729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 8 10:08:23.746885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 8 10:08:23.749375 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 8 10:08:23.752881 kernel: scsi host0: ahci Jul 8 10:08:23.750360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 8 10:08:23.757684 kernel: scsi host1: ahci Jul 8 10:08:23.757952 kernel: scsi host2: ahci Jul 8 10:08:23.759484 kernel: scsi host3: ahci Jul 8 10:08:23.759692 kernel: scsi host4: ahci Jul 8 10:08:23.759839 kernel: scsi host5: ahci Jul 8 10:08:23.760302 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 8 10:08:23.762710 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 8 10:08:23.762732 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 8 10:08:23.764120 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 8 10:08:23.764139 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 8 10:08:23.765991 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 8 10:08:23.782999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 8 10:08:23.784660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 8 10:08:23.813117 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 8 10:08:23.821526 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 8 10:08:23.821646 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 8 10:08:23.830332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 8 10:08:23.833960 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 8 10:08:23.865475 disk-uuid[635]: Primary Header is updated. Jul 8 10:08:23.865475 disk-uuid[635]: Secondary Entries is updated. Jul 8 10:08:23.865475 disk-uuid[635]: Secondary Header is updated. 
Jul 8 10:08:23.869261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 8 10:08:23.873275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 8 10:08:24.077096 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 8 10:08:24.077174 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 8 10:08:24.077186 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 8 10:08:24.077196 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 8 10:08:24.078278 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 8 10:08:24.079259 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 8 10:08:24.080268 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 8 10:08:24.080282 kernel: ata3.00: applying bridge limits Jul 8 10:08:24.081290 kernel: ata3.00: configured for UDMA/100 Jul 8 10:08:24.083276 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 8 10:08:24.145791 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 8 10:08:24.146197 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 8 10:08:24.158309 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 8 10:08:24.594745 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 8 10:08:24.598429 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 8 10:08:24.601559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 8 10:08:24.604491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 8 10:08:24.608455 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 8 10:08:24.636062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 8 10:08:24.875208 disk-uuid[636]: The operation has completed successfully. Jul 8 10:08:24.876923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 8 10:08:24.900029 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 8 10:08:24.900149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 8 10:08:24.937378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 8 10:08:24.961639 sh[664]: Success Jul 8 10:08:24.980617 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 8 10:08:24.980670 kernel: device-mapper: uevent: version 1.0.3 Jul 8 10:08:24.981648 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 8 10:08:24.990262 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 8 10:08:25.024898 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 8 10:08:25.028403 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 8 10:08:25.051964 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 8 10:08:25.059949 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 8 10:08:25.059978 kernel: BTRFS: device fsid 8a7b8c84-7fe6-440f-95a1-3ff425e81fda devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (676) Jul 8 10:08:25.062756 kernel: BTRFS info (device dm-0): first mount of filesystem 8a7b8c84-7fe6-440f-95a1-3ff425e81fda Jul 8 10:08:25.062779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 8 10:08:25.062789 kernel: BTRFS info (device dm-0): using free-space-tree Jul 8 10:08:25.067568 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 8 10:08:25.068207 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 8 10:08:25.070370 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 8 10:08:25.071263 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 8 10:08:25.072601 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 8 10:08:25.103284 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Jul 8 10:08:25.103329 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:08:25.103341 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 8 10:08:25.104809 kernel: BTRFS info (device vda6): using free-space-tree Jul 8 10:08:25.112277 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:08:25.113169 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 8 10:08:25.115202 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 8 10:08:25.229127 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 8 10:08:25.232587 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 8 10:08:25.236724 ignition[754]: Ignition 2.21.0 Jul 8 10:08:25.236739 ignition[754]: Stage: fetch-offline Jul 8 10:08:25.236772 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:08:25.236781 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:08:25.236866 ignition[754]: parsed url from cmdline: "" Jul 8 10:08:25.236869 ignition[754]: no config URL provided Jul 8 10:08:25.236874 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jul 8 10:08:25.236882 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jul 8 10:08:25.236906 ignition[754]: op(1): [started] loading QEMU firmware config module Jul 8 10:08:25.236911 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 8 10:08:25.246171 ignition[754]: op(1): [finished] loading QEMU firmware config module Jul 8 10:08:25.282993 ignition[754]: parsing config with SHA512: b684928d1143ea02f8ab888709202d3bdd2ca11467dbb504303aa420fe0005497f5e19ead7c6f4f7202dd0fa9382da36aaf3b311719904e3a1b75df7ed8f4377 Jul 8 10:08:25.286396 unknown[754]: fetched base config from "system" Jul 8 10:08:25.286407 unknown[754]: fetched user config from "qemu" Jul 8 10:08:25.286719 ignition[754]: fetch-offline: fetch-offline passed Jul 8 10:08:25.287601 systemd-networkd[851]: lo: Link UP Jul 8 10:08:25.286776 ignition[754]: Ignition finished successfully Jul 8 10:08:25.287605 systemd-networkd[851]: lo: Gained carrier Jul 8 10:08:25.289149 systemd-networkd[851]: Enumeration completed Jul 8 10:08:25.289284 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 8 10:08:25.289585 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 10:08:25.289589 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 8 10:08:25.290057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 8 10:08:25.291001 systemd-networkd[851]: eth0: Link UP Jul 8 10:08:25.291005 systemd-networkd[851]: eth0: Gained carrier Jul 8 10:08:25.291012 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 8 10:08:25.293584 systemd[1]: Reached target network.target - Network. Jul 8 10:08:25.293756 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 8 10:08:25.294596 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 8 10:08:25.311340 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 8 10:08:25.338272 ignition[858]: Ignition 2.21.0 Jul 8 10:08:25.338287 ignition[858]: Stage: kargs Jul 8 10:08:25.338413 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:08:25.338423 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:08:25.340573 ignition[858]: kargs: kargs passed Jul 8 10:08:25.340632 ignition[858]: Ignition finished successfully Jul 8 10:08:25.346302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 8 10:08:25.348535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 8 10:08:25.539947 ignition[867]: Ignition 2.21.0 Jul 8 10:08:25.539961 ignition[867]: Stage: disks Jul 8 10:08:25.540206 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 8 10:08:25.540220 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:08:25.541550 ignition[867]: disks: disks passed Jul 8 10:08:25.541605 ignition[867]: Ignition finished successfully Jul 8 10:08:25.547896 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 8 10:08:25.549936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jul 8 10:08:25.550015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 8 10:08:25.552010 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 8 10:08:25.552503 systemd[1]: Reached target sysinit.target - System Initialization. Jul 8 10:08:25.552814 systemd[1]: Reached target basic.target - Basic System. Jul 8 10:08:25.559272 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 8 10:08:25.607025 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 8 10:08:25.614898 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 8 10:08:25.617181 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 8 10:08:25.742265 kernel: EXT4-fs (vda9): mounted filesystem 29d3077b-4f9b-456e-9d11-186262f0abd5 r/w with ordered data mode. Quota mode: none. Jul 8 10:08:25.742816 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 8 10:08:25.744112 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 8 10:08:25.746564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 8 10:08:25.748449 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 8 10:08:25.749486 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 8 10:08:25.749524 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 8 10:08:25.749545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 8 10:08:25.767687 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 8 10:08:25.769072 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 8 10:08:25.772262 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Jul 8 10:08:25.774640 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:08:25.774666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 8 10:08:25.774681 kernel: BTRFS info (device vda6): using free-space-tree Jul 8 10:08:25.779604 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 8 10:08:25.813679 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 8 10:08:25.818093 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 8 10:08:25.822406 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 8 10:08:25.826904 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 8 10:08:25.939014 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 8 10:08:25.941175 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 8 10:08:25.942717 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 8 10:08:25.963275 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 8 10:08:25.976411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 8 10:08:25.992400 ignition[999]: INFO : Ignition 2.21.0 Jul 8 10:08:25.992400 ignition[999]: INFO : Stage: mount Jul 8 10:08:25.993996 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 8 10:08:25.993996 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 8 10:08:25.996119 ignition[999]: INFO : mount: mount passed Jul 8 10:08:25.996119 ignition[999]: INFO : Ignition finished successfully Jul 8 10:08:25.998699 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 8 10:08:26.001807 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 8 10:08:26.059641 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 8 10:08:26.061681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 8 10:08:26.081253 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Jul 8 10:08:26.083791 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd
Jul 8 10:08:26.083813 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 8 10:08:26.083824 kernel: BTRFS info (device vda6): using free-space-tree
Jul 8 10:08:26.087473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 8 10:08:26.122229 ignition[1028]: INFO : Ignition 2.21.0
Jul 8 10:08:26.122229 ignition[1028]: INFO : Stage: files
Jul 8 10:08:26.123876 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 8 10:08:26.123876 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 10:08:26.123876 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Jul 8 10:08:26.127332 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 8 10:08:26.127332 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 8 10:08:26.130170 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 8 10:08:26.130170 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 8 10:08:26.130170 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 8 10:08:26.130170 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 8 10:08:26.130170 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 8 10:08:26.128088 unknown[1028]: wrote ssh authorized keys file for user: core
Jul 8 10:08:26.177078 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 8 10:08:26.402212 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 8 10:08:26.402212 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 8 10:08:26.513450 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 8 10:08:26.515331 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 8 10:08:26.526021 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 8 10:08:26.949503 systemd-networkd[851]: eth0: Gained IPv6LL
Jul 8 10:08:27.305155 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 8 10:08:28.485658 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 8 10:08:28.485658 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 8 10:08:28.489983 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 8 10:08:28.493268 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 8 10:08:28.493268 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 8 10:08:28.493268 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 8 10:08:28.498489 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 8 10:08:28.498489 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 8 10:08:28.498489 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 8 10:08:28.498489 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 8 10:08:28.514630 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 8 10:08:28.520267 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 8 10:08:28.522057 ignition[1028]: INFO : files: files passed
Jul 8 10:08:28.522057 ignition[1028]: INFO : Ignition finished successfully
Jul 8 10:08:28.531228 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 8 10:08:28.535807 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 8 10:08:28.538029 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 8 10:08:28.558549 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 8 10:08:28.558763 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 8 10:08:28.562257 initrd-setup-root-after-ignition[1057]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 8 10:08:28.565947 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 10:08:28.567668 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 10:08:28.569722 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 8 10:08:28.572309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 8 10:08:28.575126 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 8 10:08:28.576114 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 8 10:08:28.622871 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 8 10:08:28.623018 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 8 10:08:28.624146 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 8 10:08:28.626224 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 8 10:08:28.628083 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 8 10:08:28.631427 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 8 10:08:28.647190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 8 10:08:28.650719 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 8 10:08:28.682060 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 8 10:08:28.682217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 10:08:28.684472 systemd[1]: Stopped target timers.target - Timer Units.
Jul 8 10:08:28.686786 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 8 10:08:28.686889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 8 10:08:28.690713 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 8 10:08:28.692896 systemd[1]: Stopped target basic.target - Basic System.
Jul 8 10:08:28.694785 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 8 10:08:28.696750 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 8 10:08:28.698930 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 8 10:08:28.701173 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 8 10:08:28.703357 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 8 10:08:28.705449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 8 10:08:28.708678 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 8 10:08:28.708809 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 8 10:08:28.710589 systemd[1]: Stopped target swap.target - Swaps.
Jul 8 10:08:28.710869 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 8 10:08:28.710976 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 8 10:08:28.715359 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 8 10:08:28.716387 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 10:08:28.716661 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 8 10:08:28.720223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 10:08:28.721133 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 8 10:08:28.721252 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 8 10:08:28.725102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 8 10:08:28.725211 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 8 10:08:28.726172 systemd[1]: Stopped target paths.target - Path Units.
Jul 8 10:08:28.726553 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 8 10:08:28.733304 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 10:08:28.733443 systemd[1]: Stopped target slices.target - Slice Units.
Jul 8 10:08:28.736788 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 8 10:08:28.737648 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 8 10:08:28.737733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 8 10:08:28.739291 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 8 10:08:28.739373 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 8 10:08:28.740893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 8 10:08:28.741002 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 8 10:08:28.743712 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 8 10:08:28.743812 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 8 10:08:28.748577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 8 10:08:28.749372 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 8 10:08:28.749481 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 10:08:28.754030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 8 10:08:28.755119 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 8 10:08:28.755244 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 10:08:28.757978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 8 10:08:28.758078 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 8 10:08:28.763608 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 8 10:08:28.768366 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 8 10:08:28.789367 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 8 10:08:28.880525 ignition[1083]: INFO : Ignition 2.21.0
Jul 8 10:08:28.880525 ignition[1083]: INFO : Stage: umount
Jul 8 10:08:28.882206 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 8 10:08:28.882206 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 8 10:08:28.885045 ignition[1083]: INFO : umount: umount passed
Jul 8 10:08:28.885045 ignition[1083]: INFO : Ignition finished successfully
Jul 8 10:08:28.887580 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 8 10:08:28.887703 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 8 10:08:28.889495 systemd[1]: Stopped target network.target - Network.
Jul 8 10:08:28.890460 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 8 10:08:28.890517 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 8 10:08:28.892108 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 8 10:08:28.892153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 8 10:08:28.893902 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 8 10:08:28.893951 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 8 10:08:28.895639 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 8 10:08:28.895684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 8 10:08:28.897499 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 8 10:08:28.899358 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 8 10:08:28.906599 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 8 10:08:28.906723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 8 10:08:28.910017 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 8 10:08:28.910299 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 8 10:08:28.910416 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 8 10:08:28.913984 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 8 10:08:28.914624 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 8 10:08:28.916591 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 8 10:08:28.916634 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 10:08:28.920617 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 8 10:08:28.920682 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 8 10:08:28.920729 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 8 10:08:28.923558 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 8 10:08:28.923604 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 8 10:08:28.926656 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 8 10:08:28.926700 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 8 10:08:28.927531 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 8 10:08:28.927577 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 10:08:28.931423 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 10:08:28.933208 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 8 10:08:28.933301 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 8 10:08:28.942666 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 8 10:08:28.942788 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 8 10:08:28.952051 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 8 10:08:28.952278 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 10:08:28.953260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 8 10:08:28.953305 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 8 10:08:28.955448 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 8 10:08:28.955480 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 10:08:28.958356 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 8 10:08:28.958405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 8 10:08:28.959588 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 8 10:08:28.959634 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 8 10:08:28.961580 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 8 10:08:28.961626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 8 10:08:28.970722 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 8 10:08:28.971846 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 8 10:08:28.971931 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 10:08:28.976484 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 8 10:08:28.977556 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 10:08:28.979969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 10:08:28.980029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:08:28.984662 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 8 10:08:28.984721 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 8 10:08:28.984768 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 8 10:08:28.985114 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 8 10:08:28.985213 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 8 10:08:28.996090 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 8 10:08:28.996262 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 8 10:08:28.997437 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 8 10:08:28.998743 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 8 10:08:28.998800 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 8 10:08:29.003534 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 8 10:08:29.019758 systemd[1]: Switching root.
Jul 8 10:08:29.063585 systemd-journald[219]: Journal stopped
Jul 8 10:08:30.354463 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Jul 8 10:08:30.354534 kernel: SELinux: policy capability network_peer_controls=1
Jul 8 10:08:30.354548 kernel: SELinux: policy capability open_perms=1
Jul 8 10:08:30.354565 kernel: SELinux: policy capability extended_socket_class=1
Jul 8 10:08:30.354576 kernel: SELinux: policy capability always_check_network=0
Jul 8 10:08:30.354596 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 8 10:08:30.354607 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 8 10:08:30.354626 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 8 10:08:30.354638 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 8 10:08:30.354649 kernel: SELinux: policy capability userspace_initial_context=0
Jul 8 10:08:30.354664 kernel: audit: type=1403 audit(1751969309.609:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 8 10:08:30.354682 systemd[1]: Successfully loaded SELinux policy in 59.004ms.
Jul 8 10:08:30.354703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.267ms.
Jul 8 10:08:30.354716 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 8 10:08:30.354728 systemd[1]: Detected virtualization kvm.
Jul 8 10:08:30.354740 systemd[1]: Detected architecture x86-64.
Jul 8 10:08:30.354764 systemd[1]: Detected first boot.
Jul 8 10:08:30.354782 systemd[1]: Initializing machine ID from VM UUID.
Jul 8 10:08:30.354793 zram_generator::config[1129]: No configuration found.
Jul 8 10:08:30.354806 kernel: Guest personality initialized and is inactive
Jul 8 10:08:30.354817 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 8 10:08:30.354828 kernel: Initialized host personality
Jul 8 10:08:30.354839 kernel: NET: Registered PF_VSOCK protocol family
Jul 8 10:08:30.354851 systemd[1]: Populated /etc with preset unit settings.
Jul 8 10:08:30.354865 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 8 10:08:30.354877 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 8 10:08:30.354889 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 8 10:08:30.354901 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 8 10:08:30.354913 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 8 10:08:30.354925 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 8 10:08:30.354936 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 8 10:08:30.354948 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 8 10:08:30.354960 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 8 10:08:30.354974 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 8 10:08:30.354986 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 8 10:08:30.354997 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 8 10:08:30.355010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 8 10:08:30.355028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 8 10:08:30.355041 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 8 10:08:30.355061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 8 10:08:30.355074 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 8 10:08:30.355089 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 8 10:08:30.355101 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 8 10:08:30.355118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 8 10:08:30.355130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 8 10:08:30.355142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 8 10:08:30.355154 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 8 10:08:30.355166 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 8 10:08:30.355178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 8 10:08:30.355191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 8 10:08:30.355203 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 8 10:08:30.355215 systemd[1]: Reached target slices.target - Slice Units.
Jul 8 10:08:30.355227 systemd[1]: Reached target swap.target - Swaps.
Jul 8 10:08:30.355253 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 8 10:08:30.355265 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 8 10:08:30.355277 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 8 10:08:30.355291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 8 10:08:30.355308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 8 10:08:30.355335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 8 10:08:30.355356 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 8 10:08:30.355373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 8 10:08:30.355389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 8 10:08:30.355405 systemd[1]: Mounting media.mount - External Media Directory...
Jul 8 10:08:30.355422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:30.355437 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 8 10:08:30.355449 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 8 10:08:30.355463 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 8 10:08:30.355483 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 8 10:08:30.355499 systemd[1]: Reached target machines.target - Containers.
Jul 8 10:08:30.355515 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 8 10:08:30.355527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 10:08:30.355539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 8 10:08:30.355551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 8 10:08:30.355563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 10:08:30.355574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 8 10:08:30.355586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 10:08:30.355600 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 8 10:08:30.355612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 10:08:30.355624 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 8 10:08:30.355652 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 8 10:08:30.355664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 8 10:08:30.355676 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 8 10:08:30.355687 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 8 10:08:30.355700 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 10:08:30.355714 kernel: fuse: init (API version 7.41)
Jul 8 10:08:30.355725 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 8 10:08:30.355737 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 8 10:08:30.355748 kernel: ACPI: bus type drm_connector registered
Jul 8 10:08:30.355759 kernel: loop: module loaded
Jul 8 10:08:30.355770 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 8 10:08:30.355782 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 8 10:08:30.355794 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 8 10:08:30.355808 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 8 10:08:30.355819 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 8 10:08:30.355831 systemd[1]: Stopped verity-setup.service.
Jul 8 10:08:30.355867 systemd-journald[1207]: Collecting audit messages is disabled.
Jul 8 10:08:30.355890 systemd-journald[1207]: Journal started
Jul 8 10:08:30.355915 systemd-journald[1207]: Runtime Journal (/run/log/journal/65ab2deedf634ec8b469651c7176c362) is 6M, max 48.5M, 42.4M free.
Jul 8 10:08:30.114797 systemd[1]: Queued start job for default target multi-user.target.
Jul 8 10:08:30.137168 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 8 10:08:30.137605 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 8 10:08:30.361258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:30.365340 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 8 10:08:30.366720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 8 10:08:30.367799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 8 10:08:30.368915 systemd[1]: Mounted media.mount - External Media Directory.
Jul 8 10:08:30.369926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 8 10:08:30.371080 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 8 10:08:30.372210 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 8 10:08:30.373427 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 8 10:08:30.374819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 8 10:08:30.376293 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 8 10:08:30.376512 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 8 10:08:30.377883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 10:08:30.378106 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 10:08:30.379479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 8 10:08:30.379676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 8 10:08:30.380951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 10:08:30.381157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 10:08:30.382619 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 8 10:08:30.382825 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 8 10:08:30.384130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 10:08:30.384341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 10:08:30.385680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 8 10:08:30.387025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 8 10:08:30.388517 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 8 10:08:30.389995 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 8 10:08:30.403306 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 8 10:08:30.405799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 8 10:08:30.408139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 8 10:08:30.409372 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 8 10:08:30.409400 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 8 10:08:30.411348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 8 10:08:30.416337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 8 10:08:30.417459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 10:08:30.418520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 8 10:08:30.421216 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 8 10:08:30.422552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 8 10:08:30.423513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 8 10:08:30.424784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 8 10:08:30.425819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 8 10:08:30.429441 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 8 10:08:30.441125 systemd-journald[1207]: Time spent on flushing to /var/log/journal/65ab2deedf634ec8b469651c7176c362 is 28.002ms for 1060 entries.
Jul 8 10:08:30.441125 systemd-journald[1207]: System Journal (/var/log/journal/65ab2deedf634ec8b469651c7176c362) is 8M, max 195.6M, 187.6M free.
Jul 8 10:08:30.488354 systemd-journald[1207]: Received client request to flush runtime journal.
Jul 8 10:08:30.488425 kernel: loop0: detected capacity change from 0 to 221472
Jul 8 10:08:30.488450 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 8 10:08:30.433399 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 8 10:08:30.436920 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 8 10:08:30.438472 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 8 10:08:30.442873 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 8 10:08:30.444687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 8 10:08:30.447147 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 8 10:08:30.461656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 8 10:08:30.469758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 8 10:08:30.490556 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 8 10:08:30.497084 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 8 10:08:30.502070 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 8 10:08:30.504953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 8 10:08:30.508344 kernel: loop1: detected capacity change from 0 to 146488
Jul 8 10:08:30.531777 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jul 8 10:08:30.531793 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Jul 8 10:08:30.535423 kernel: loop2: detected capacity change from 0 to 114000
Jul 8 10:08:30.536470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 8 10:08:30.562263 kernel: loop3: detected capacity change from 0 to 221472
Jul 8 10:08:30.574264 kernel: loop4: detected capacity change from 0 to 146488
Jul 8 10:08:30.584264 kernel: loop5: detected capacity change from 0 to 114000
Jul 8 10:08:30.592438 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 8 10:08:30.593001 (sd-merge)[1271]: Merged extensions into '/usr'.
Jul 8 10:08:30.599566 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 8 10:08:30.599582 systemd[1]: Reloading...
Jul 8 10:08:30.654268 zram_generator::config[1294]: No configuration found.
Jul 8 10:08:30.856859 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 8 10:08:30.884572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 10:08:30.965922 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 8 10:08:30.966469 systemd[1]: Reloading finished in 366 ms.
Jul 8 10:08:30.989749 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 8 10:08:30.991571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 8 10:08:31.008653 systemd[1]: Starting ensure-sysext.service...
Jul 8 10:08:31.010577 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 8 10:08:31.020371 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Jul 8 10:08:31.020384 systemd[1]: Reloading...
Jul 8 10:08:31.062486 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 8 10:08:31.062526 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 8 10:08:31.064610 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 8 10:08:31.064878 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 8 10:08:31.067737 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 8 10:08:31.068125 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 8 10:08:31.069301 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Jul 8 10:08:31.078778 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 8 10:08:31.078857 systemd-tmpfiles[1335]: Skipping /boot
Jul 8 10:08:31.080266 zram_generator::config[1362]: No configuration found.
Jul 8 10:08:31.095851 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Jul 8 10:08:31.095929 systemd-tmpfiles[1335]: Skipping /boot
Jul 8 10:08:31.176256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 8 10:08:31.256345 systemd[1]: Reloading finished in 235 ms.
Jul 8 10:08:31.272835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 8 10:08:31.294014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 8 10:08:31.302827 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 8 10:08:31.305140 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 8 10:08:31.307579 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 8 10:08:31.315079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 8 10:08:31.317859 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 8 10:08:31.322053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 8 10:08:31.326154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.326366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 10:08:31.327536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 10:08:31.329578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 10:08:31.332801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 10:08:31.334065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 10:08:31.334163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 10:08:31.340421 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 8 10:08:31.341573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.343201 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 8 10:08:31.345601 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 10:08:31.345841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 10:08:31.348960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 10:08:31.349187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 10:08:31.354961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 10:08:31.355729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 10:08:31.363537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.364159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 10:08:31.368468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 10:08:31.371258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 10:08:31.373079 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Jul 8 10:08:31.376818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 10:08:31.379499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 10:08:31.379769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 10:08:31.380165 augenrules[1436]: No rules
Jul 8 10:08:31.387006 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 8 10:08:31.388285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.389771 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 8 10:08:31.390267 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 8 10:08:31.392050 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 8 10:08:31.394078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 10:08:31.394479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 10:08:31.396653 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 8 10:08:31.398756 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 8 10:08:31.400530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 10:08:31.400740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 10:08:31.402508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 10:08:31.402711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 10:08:31.412425 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 8 10:08:31.414154 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 8 10:08:31.443894 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.450361 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 8 10:08:31.640581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 8 10:08:31.648298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 8 10:08:31.659500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 8 10:08:31.661681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 8 10:08:31.664539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 8 10:08:31.667405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 8 10:08:31.667521 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 8 10:08:31.670996 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 8 10:08:31.672102 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 8 10:08:31.672191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 8 10:08:31.684416 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 8 10:08:31.692905 systemd[1]: Finished ensure-sysext.service.
Jul 8 10:08:31.698837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 8 10:08:31.699182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 8 10:08:31.700586 augenrules[1477]: /sbin/augenrules: No change
Jul 8 10:08:31.701160 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 8 10:08:31.701490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 8 10:08:31.703051 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 8 10:08:31.704551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 8 10:08:31.709254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 8 10:08:31.709491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 8 10:08:31.720398 augenrules[1518]: No rules
Jul 8 10:08:31.732210 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 8 10:08:31.732312 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 8 10:08:31.732669 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 8 10:08:31.744521 kernel: mousedev: PS/2 mouse device common for all mice
Jul 8 10:08:31.743382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 8 10:08:31.749673 kernel: ACPI: button: Power Button [PWRF]
Jul 8 10:08:31.750002 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 8 10:08:31.751140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 8 10:08:31.751199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 8 10:08:31.754802 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 8 10:08:31.762906 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 8 10:08:31.763221 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 8 10:08:31.763439 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 8 10:08:31.777636 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 8 10:08:31.807519 systemd-resolved[1403]: Positive Trust Anchors:
Jul 8 10:08:31.807538 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 8 10:08:31.807568 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 8 10:08:31.821867 systemd-resolved[1403]: Defaulting to hostname 'linux'.
Jul 8 10:08:31.825634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 8 10:08:31.826919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 8 10:08:31.854304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:08:31.862399 systemd-networkd[1494]: lo: Link UP
Jul 8 10:08:31.862667 systemd-networkd[1494]: lo: Gained carrier
Jul 8 10:08:31.864384 systemd-networkd[1494]: Enumeration completed
Jul 8 10:08:31.865346 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 8 10:08:31.865427 systemd-networkd[1494]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 8 10:08:31.866307 systemd-networkd[1494]: eth0: Link UP
Jul 8 10:08:31.866744 systemd-networkd[1494]: eth0: Gained carrier
Jul 8 10:08:31.866825 systemd-networkd[1494]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 8 10:08:31.867894 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 8 10:08:31.873901 systemd[1]: Reached target network.target - Network.
Jul 8 10:08:31.889611 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 8 10:08:31.892077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 8 10:08:31.893288 systemd-networkd[1494]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 8 10:08:31.907483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 8 10:08:31.907736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:08:31.913406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 8 10:08:31.964426 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 8 10:08:31.980478 kernel: kvm_amd: TSC scaling supported
Jul 8 10:08:31.980565 kernel: kvm_amd: Nested Virtualization enabled
Jul 8 10:08:31.980605 kernel: kvm_amd: Nested Paging enabled
Jul 8 10:08:31.980618 kernel: kvm_amd: LBR virtualization supported
Jul 8 10:08:31.981649 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 8 10:08:31.981685 kernel: kvm_amd: Virtual GIF supported
Jul 8 10:08:32.000387 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 8 10:08:32.000541 systemd[1]: Reached target time-set.target - System Time Set.
Jul 8 10:08:33.734833 systemd-timesyncd[1527]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 8 10:08:33.735179 systemd-resolved[1403]: Clock change detected. Flushing caches.
Jul 8 10:08:33.735190 systemd-timesyncd[1527]: Initial clock synchronization to Tue 2025-07-08 10:08:33.734405 UTC.
Jul 8 10:08:33.752178 kernel: EDAC MC: Ver: 3.0.0
Jul 8 10:08:33.763628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 8 10:08:33.765048 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 8 10:08:33.766227 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 8 10:08:33.767422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 8 10:08:33.768609 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 8 10:08:33.769943 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 8 10:08:33.771171 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 8 10:08:33.772398 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 8 10:08:33.773602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 8 10:08:33.773634 systemd[1]: Reached target paths.target - Path Units.
Jul 8 10:08:33.774510 systemd[1]: Reached target timers.target - Timer Units.
Jul 8 10:08:33.776065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 8 10:08:33.778756 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 8 10:08:33.781776 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 8 10:08:33.783193 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 8 10:08:33.784398 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 8 10:08:33.787915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 8 10:08:33.789206 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 8 10:08:33.790912 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 8 10:08:33.792656 systemd[1]: Reached target sockets.target - Socket Units.
Jul 8 10:08:33.793591 systemd[1]: Reached target basic.target - Basic System.
Jul 8 10:08:33.794518 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 8 10:08:33.794549 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 8 10:08:33.795526 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 8 10:08:33.797488 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 8 10:08:33.799344 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 8 10:08:33.801502 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 8 10:08:33.804679 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 8 10:08:33.805671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 8 10:08:33.807151 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 8 10:08:33.810232 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 8 10:08:33.810976 jq[1567]: false
Jul 8 10:08:33.813122 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 8 10:08:33.816954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 8 10:08:33.819757 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 8 10:08:33.823928 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jul 8 10:08:33.823938 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jul 8 10:08:33.825238 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 8 10:08:33.827130 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 8 10:08:33.827561 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 8 10:08:33.828862 systemd[1]: Starting update-engine.service - Update Engine...
Jul 8 10:08:33.831826 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 8 10:08:33.833812 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
Jul 8 10:08:33.833803 oslogin_cache_refresh[1569]: Failure getting users, quitting
Jul 8 10:08:33.833896 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 8 10:08:33.833896 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
Jul 8 10:08:33.833822 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 8 10:08:33.833873 oslogin_cache_refresh[1569]: Refreshing group entry cache
Jul 8 10:08:33.835342 extend-filesystems[1568]: Found /dev/vda6
Jul 8 10:08:33.838502 extend-filesystems[1568]: Found /dev/vda9
Jul 8 10:08:33.839495 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jul 8 10:08:33.839530 oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jul 8 10:08:33.839585 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 8 10:08:33.839614 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 8 10:08:33.842719 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 8 10:08:33.844236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 8 10:08:33.844507 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 8 10:08:33.844819 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 8 10:08:33.845067 jq[1585]: true
Jul 8 10:08:33.845071 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 8 10:08:33.845453 extend-filesystems[1568]: Checking size of /dev/vda9
Jul 8 10:08:33.846524 systemd[1]: motdgen.service: Deactivated successfully.
Jul 8 10:08:33.846758 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 8 10:08:33.849517 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 8 10:08:33.850348 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 8 10:08:33.859307 update_engine[1583]: I20250708 10:08:33.858127 1583 main.cc:92] Flatcar Update Engine starting
Jul 8 10:08:33.861512 extend-filesystems[1568]: Resized partition /dev/vda9
Jul 8 10:08:33.867132 extend-filesystems[1607]: resize2fs 1.47.2 (1-Jan-2025)
Jul 8 10:08:33.868390 jq[1594]: true
Jul 8 10:08:33.877370 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 8 10:08:33.892114 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 8 10:08:33.906018 tar[1592]: linux-amd64/helm
Jul 8 10:08:33.908145 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 8 10:08:33.924589 dbus-daemon[1565]: [system] SELinux support is enabled
Jul 8 10:08:33.925114 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 8 10:08:33.929442 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 8 10:08:33.929442 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 8 10:08:33.929442 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 8 10:08:33.932935 extend-filesystems[1568]: Resized filesystem in /dev/vda9
Jul 8 10:08:33.930670 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 8 10:08:33.934041 update_engine[1583]: I20250708 10:08:33.933156 1583 update_check_scheduler.cc:74] Next update check in 3m49s
Jul 8 10:08:33.930958 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 8 10:08:33.940473 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 8 10:08:33.940496 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 8 10:08:33.943739 bash[1628]: Updated "/home/core/.ssh/authorized_keys"
Jul 8 10:08:33.943949 systemd-logind[1579]: New seat seat0.
Jul 8 10:08:33.944694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 8 10:08:33.947313 systemd[1]: Started update-engine.service - Update Engine.
Jul 8 10:08:33.949257 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 8 10:08:33.952538 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 8 10:08:33.953309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 8 10:08:33.954227 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 8 10:08:33.955670 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 8 10:08:33.955776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 8 10:08:33.959341 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 8 10:08:34.063741 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 8 10:08:34.077374 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 8 10:08:34.089328 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 8 10:08:34.093880 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 8 10:08:34.174941 systemd[1]: issuegen.service: Deactivated successfully.
Jul 8 10:08:34.175269 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 8 10:08:34.184553 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 8 10:08:34.209392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 8 10:08:34.213162 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 8 10:08:34.215437 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 8 10:08:34.217262 systemd[1]: Reached target getty.target - Login Prompts.
Jul 8 10:08:34.306748 containerd[1595]: time="2025-07-08T10:08:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 8 10:08:34.309333 containerd[1595]: time="2025-07-08T10:08:34.309266689Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Jul 8 10:08:34.335649 tar[1592]: linux-amd64/LICENSE
Jul 8 10:08:34.335816 tar[1592]: linux-amd64/README.md
Jul 8 10:08:34.355011 containerd[1595]: time="2025-07-08T10:08:34.354938023Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.366µs"
Jul 8 10:08:34.355011 containerd[1595]: time="2025-07-08T10:08:34.354999708Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 8 10:08:34.355094 containerd[1595]: time="2025-07-08T10:08:34.355026398Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 8 10:08:34.355320 containerd[1595]: time="2025-07-08T10:08:34.355291626Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 8 10:08:34.355361 containerd[1595]: time="2025-07-08T10:08:34.355319929Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 8 10:08:34.355361 containerd[1595]: time="2025-07-08T10:08:34.355353752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355460 containerd[1595]: time="2025-07-08T10:08:34.355433231Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355460 containerd[1595]: time="2025-07-08T10:08:34.355449762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355848 containerd[1595]: time="2025-07-08T10:08:34.355811861Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355848 containerd[1595]: time="2025-07-08T10:08:34.355838140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355897 containerd[1595]: time="2025-07-08T10:08:34.355848861Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355897 containerd[1595]: time="2025-07-08T10:08:34.355857096Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 8 10:08:34.355992 containerd[1595]: time="2025-07-08T10:08:34.355966120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 8 10:08:34.356371 containerd[1595]: time="2025-07-08T10:08:34.356342937Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 8 10:08:34.356395 containerd[1595]: time="2025-07-08T10:08:34.356377121Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 8 10:08:34.356395 containerd[1595]: time="2025-07-08T10:08:34.356386769Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 8 10:08:34.356463 containerd[1595]: time="2025-07-08T10:08:34.356446661Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 8 10:08:34.356970 containerd[1595]: time="2025-07-08T10:08:34.356890173Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 8 10:08:34.357106 containerd[1595]: time="2025-07-08T10:08:34.357034033Z" level=info msg="metadata content store policy set" policy=shared
Jul 8 10:08:34.364361 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 8 10:08:34.475002 containerd[1595]: time="2025-07-08T10:08:34.474943954Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 8 10:08:34.475059 containerd[1595]: time="2025-07-08T10:08:34.475023353Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 8 10:08:34.475059 containerd[1595]: time="2025-07-08T10:08:34.475042088Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 8 10:08:34.475059 containerd[1595]: time="2025-07-08T10:08:34.475056495Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475069119Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475096710Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475113241Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475127368Z" level=info
msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475143738Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 8 10:08:34.475152 containerd[1595]: time="2025-07-08T10:08:34.475153276Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 8 10:08:34.475261 containerd[1595]: time="2025-07-08T10:08:34.475163155Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 8 10:08:34.475261 containerd[1595]: time="2025-07-08T10:08:34.475177211Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 8 10:08:34.475349 containerd[1595]: time="2025-07-08T10:08:34.475311753Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 8 10:08:34.475372 containerd[1595]: time="2025-07-08T10:08:34.475347641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 8 10:08:34.475372 containerd[1595]: time="2025-07-08T10:08:34.475364783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 8 10:08:34.475414 containerd[1595]: time="2025-07-08T10:08:34.475376785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 8 10:08:34.475414 containerd[1595]: time="2025-07-08T10:08:34.475388187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 8 10:08:34.475414 containerd[1595]: time="2025-07-08T10:08:34.475404798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 8 10:08:34.475468 containerd[1595]: time="2025-07-08T10:08:34.475417842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Jul 8 10:08:34.475468 containerd[1595]: time="2025-07-08T10:08:34.475430857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 8 10:08:34.475468 containerd[1595]: time="2025-07-08T10:08:34.475446075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 8 10:08:34.475468 containerd[1595]: time="2025-07-08T10:08:34.475456445Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 8 10:08:34.475468 containerd[1595]: time="2025-07-08T10:08:34.475466173Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 8 10:08:34.475595 containerd[1595]: time="2025-07-08T10:08:34.475572713Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 8 10:08:34.475595 containerd[1595]: time="2025-07-08T10:08:34.475591718Z" level=info msg="Start snapshots syncer" Jul 8 10:08:34.475640 containerd[1595]: time="2025-07-08T10:08:34.475626393Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 8 10:08:34.477104 containerd[1595]: time="2025-07-08T10:08:34.476598817Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 8 10:08:34.477104 containerd[1595]: time="2025-07-08T10:08:34.476692062Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.476833737Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.476945757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.476994639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477013675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477031388Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477112480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477228477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477251781Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477297186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477312876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 8 10:08:34.477411 containerd[1595]: time="2025-07-08T10:08:34.477330408Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477448600Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477471152Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477483856Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477501820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477514133Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477531125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477564257Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477594203Z" level=info msg="runtime interface created" Jul 8 10:08:34.477605 containerd[1595]: time="2025-07-08T10:08:34.477603040Z" level=info msg="created NRI interface" Jul 8 10:08:34.477769 containerd[1595]: time="2025-07-08T10:08:34.477613349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 8 10:08:34.477769 containerd[1595]: time="2025-07-08T10:08:34.477678331Z" level=info msg="Connect containerd service" Jul 8 10:08:34.477920 containerd[1595]: time="2025-07-08T10:08:34.477894226Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 8 10:08:34.478881 containerd[1595]: 
time="2025-07-08T10:08:34.478843465Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 10:08:34.628505 containerd[1595]: time="2025-07-08T10:08:34.628350501Z" level=info msg="Start subscribing containerd event" Jul 8 10:08:34.628505 containerd[1595]: time="2025-07-08T10:08:34.628443786Z" level=info msg="Start recovering state" Jul 8 10:08:34.628717 containerd[1595]: time="2025-07-08T10:08:34.628689547Z" level=info msg="Start event monitor" Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628725905Z" level=info msg="Start cni network conf syncer for default" Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628738298Z" level=info msg="Start streaming server" Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628761562Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628770799Z" level=info msg="runtime interface starting up..." Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628778704Z" level=info msg="starting plugins..." Jul 8 10:08:34.628827 containerd[1595]: time="2025-07-08T10:08:34.628797720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 8 10:08:34.628934 containerd[1595]: time="2025-07-08T10:08:34.628696380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 8 10:08:34.629045 containerd[1595]: time="2025-07-08T10:08:34.629017412Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 8 10:08:34.629145 containerd[1595]: time="2025-07-08T10:08:34.629121607Z" level=info msg="containerd successfully booted in 0.322950s" Jul 8 10:08:34.629279 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 8 10:08:35.592271 systemd-networkd[1494]: eth0: Gained IPv6LL Jul 8 10:08:35.595613 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 8 10:08:35.597604 systemd[1]: Reached target network-online.target - Network is Online. Jul 8 10:08:35.600315 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 8 10:08:35.602871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:08:35.617706 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 8 10:08:35.643611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 8 10:08:35.645973 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 8 10:08:35.646360 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 8 10:08:35.649108 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 8 10:08:36.673123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:08:36.674819 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 8 10:08:36.676418 systemd[1]: Startup finished in 3.318s (kernel) + 6.968s (initrd) + 5.394s (userspace) = 15.681s. 
Jul 8 10:08:36.687507 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 10:08:37.302718 kubelet[1701]: E0708 10:08:37.302601 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 10:08:37.306583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 10:08:37.306802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 10:08:37.307236 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 264.2M memory peak. Jul 8 10:08:38.049255 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 8 10:08:38.050466 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:32868.service - OpenSSH per-connection server daemon (10.0.0.1:32868). Jul 8 10:08:38.145868 sshd[1714]: Accepted publickey for core from 10.0.0.1 port 32868 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:38.147318 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:38.154092 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 8 10:08:38.155264 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 8 10:08:38.161556 systemd-logind[1579]: New session 1 of user core. Jul 8 10:08:38.180094 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 8 10:08:38.183219 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 8 10:08:38.202735 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 8 10:08:38.205188 systemd-logind[1579]: New session c1 of user core. Jul 8 10:08:38.366645 systemd[1719]: Queued start job for default target default.target. Jul 8 10:08:38.391311 systemd[1719]: Created slice app.slice - User Application Slice. Jul 8 10:08:38.391342 systemd[1719]: Reached target paths.target - Paths. Jul 8 10:08:38.391381 systemd[1719]: Reached target timers.target - Timers. Jul 8 10:08:38.392860 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 8 10:08:38.404273 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 8 10:08:38.404394 systemd[1719]: Reached target sockets.target - Sockets. Jul 8 10:08:38.404433 systemd[1719]: Reached target basic.target - Basic System. Jul 8 10:08:38.404471 systemd[1719]: Reached target default.target - Main User Target. Jul 8 10:08:38.404502 systemd[1719]: Startup finished in 191ms. Jul 8 10:08:38.404841 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 8 10:08:38.406550 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 8 10:08:38.466472 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:41724.service - OpenSSH per-connection server daemon (10.0.0.1:41724). Jul 8 10:08:38.507415 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 41724 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:38.508705 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:38.513041 systemd-logind[1579]: New session 2 of user core. Jul 8 10:08:38.526215 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 8 10:08:38.578946 sshd[1733]: Connection closed by 10.0.0.1 port 41724 Jul 8 10:08:38.579276 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jul 8 10:08:38.595589 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:41724.service: Deactivated successfully. Jul 8 10:08:38.597448 systemd[1]: session-2.scope: Deactivated successfully. Jul 8 10:08:38.598178 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit. Jul 8 10:08:38.600800 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:41738.service - OpenSSH per-connection server daemon (10.0.0.1:41738). Jul 8 10:08:38.601339 systemd-logind[1579]: Removed session 2. Jul 8 10:08:38.650994 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 41738 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:38.652164 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:38.656205 systemd-logind[1579]: New session 3 of user core. Jul 8 10:08:38.670195 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 8 10:08:38.719088 sshd[1742]: Connection closed by 10.0.0.1 port 41738 Jul 8 10:08:38.719491 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 8 10:08:38.734555 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:41738.service: Deactivated successfully. Jul 8 10:08:38.736396 systemd[1]: session-3.scope: Deactivated successfully. Jul 8 10:08:38.737156 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit. Jul 8 10:08:38.739752 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:41746.service - OpenSSH per-connection server daemon (10.0.0.1:41746). Jul 8 10:08:38.740294 systemd-logind[1579]: Removed session 3. 
Jul 8 10:08:38.796919 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 41746 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:38.798134 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:38.802110 systemd-logind[1579]: New session 4 of user core. Jul 8 10:08:38.809203 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 8 10:08:38.863349 sshd[1751]: Connection closed by 10.0.0.1 port 41746 Jul 8 10:08:38.863725 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Jul 8 10:08:38.876612 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:41746.service: Deactivated successfully. Jul 8 10:08:38.878473 systemd[1]: session-4.scope: Deactivated successfully. Jul 8 10:08:38.879162 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit. Jul 8 10:08:38.881901 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:41752.service - OpenSSH per-connection server daemon (10.0.0.1:41752). Jul 8 10:08:38.882432 systemd-logind[1579]: Removed session 4. Jul 8 10:08:38.928627 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 41752 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:38.929827 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:38.933836 systemd-logind[1579]: New session 5 of user core. Jul 8 10:08:38.944205 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 8 10:08:39.001766 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 8 10:08:39.002093 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:08:39.023893 sudo[1761]: pam_unix(sudo:session): session closed for user root Jul 8 10:08:39.025583 sshd[1760]: Connection closed by 10.0.0.1 port 41752 Jul 8 10:08:39.026092 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 8 10:08:39.044673 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:41752.service: Deactivated successfully. Jul 8 10:08:39.046396 systemd[1]: session-5.scope: Deactivated successfully. Jul 8 10:08:39.047110 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit. Jul 8 10:08:39.049881 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:41760.service - OpenSSH per-connection server daemon (10.0.0.1:41760). Jul 8 10:08:39.050423 systemd-logind[1579]: Removed session 5. Jul 8 10:08:39.104692 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 41760 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:39.105951 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:39.110157 systemd-logind[1579]: New session 6 of user core. Jul 8 10:08:39.118200 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 8 10:08:39.171453 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 8 10:08:39.171773 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:08:39.287581 sudo[1772]: pam_unix(sudo:session): session closed for user root Jul 8 10:08:39.294112 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 8 10:08:39.294425 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:08:39.304118 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 8 10:08:39.360252 augenrules[1794]: No rules Jul 8 10:08:39.362195 systemd[1]: audit-rules.service: Deactivated successfully. Jul 8 10:08:39.362494 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 8 10:08:39.363592 sudo[1771]: pam_unix(sudo:session): session closed for user root Jul 8 10:08:39.365179 sshd[1770]: Connection closed by 10.0.0.1 port 41760 Jul 8 10:08:39.365523 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jul 8 10:08:39.382735 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:41760.service: Deactivated successfully. Jul 8 10:08:39.384637 systemd[1]: session-6.scope: Deactivated successfully. Jul 8 10:08:39.385383 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit. Jul 8 10:08:39.388169 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:41776.service - OpenSSH per-connection server daemon (10.0.0.1:41776). Jul 8 10:08:39.388758 systemd-logind[1579]: Removed session 6. Jul 8 10:08:39.446279 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 41776 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:08:39.447504 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:08:39.451629 systemd-logind[1579]: New session 7 of user core. 
Jul 8 10:08:39.467255 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 8 10:08:39.519964 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 8 10:08:39.520317 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 8 10:08:40.100660 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 8 10:08:40.122399 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 8 10:08:40.325957 dockerd[1827]: time="2025-07-08T10:08:40.325879376Z" level=info msg="Starting up" Jul 8 10:08:40.326736 dockerd[1827]: time="2025-07-08T10:08:40.326690537Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 8 10:08:40.341200 dockerd[1827]: time="2025-07-08T10:08:40.341147464Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 8 10:08:40.635004 dockerd[1827]: time="2025-07-08T10:08:40.634926950Z" level=info msg="Loading containers: start." Jul 8 10:08:40.645107 kernel: Initializing XFRM netlink socket Jul 8 10:08:40.895618 systemd-networkd[1494]: docker0: Link UP Jul 8 10:08:40.900911 dockerd[1827]: time="2025-07-08T10:08:40.900857851Z" level=info msg="Loading containers: done." Jul 8 10:08:40.915733 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2972244631-merged.mount: Deactivated successfully. 
Jul 8 10:08:40.916840 dockerd[1827]: time="2025-07-08T10:08:40.916802477Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 8 10:08:40.916917 dockerd[1827]: time="2025-07-08T10:08:40.916875434Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 8 10:08:40.916960 dockerd[1827]: time="2025-07-08T10:08:40.916947709Z" level=info msg="Initializing buildkit" Jul 8 10:08:40.944555 dockerd[1827]: time="2025-07-08T10:08:40.944376578Z" level=info msg="Completed buildkit initialization" Jul 8 10:08:40.950438 dockerd[1827]: time="2025-07-08T10:08:40.950391757Z" level=info msg="Daemon has completed initialization" Jul 8 10:08:40.950527 dockerd[1827]: time="2025-07-08T10:08:40.950456107Z" level=info msg="API listen on /run/docker.sock" Jul 8 10:08:40.950668 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 8 10:08:41.955865 containerd[1595]: time="2025-07-08T10:08:41.955813082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 8 10:08:42.536347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855724013.mount: Deactivated successfully. 
Jul 8 10:08:43.871501 containerd[1595]: time="2025-07-08T10:08:43.871403172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:43.872535 containerd[1595]: time="2025-07-08T10:08:43.872489068Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 8 10:08:43.873717 containerd[1595]: time="2025-07-08T10:08:43.873681594Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:43.876348 containerd[1595]: time="2025-07-08T10:08:43.876276660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:43.877246 containerd[1595]: time="2025-07-08T10:08:43.877217274Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.921345623s" Jul 8 10:08:43.877296 containerd[1595]: time="2025-07-08T10:08:43.877268139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 8 10:08:43.878318 containerd[1595]: time="2025-07-08T10:08:43.878293041Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 8 10:08:45.385915 containerd[1595]: time="2025-07-08T10:08:45.385854879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:45.386651 containerd[1595]: time="2025-07-08T10:08:45.386616557Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 8 10:08:45.387870 containerd[1595]: time="2025-07-08T10:08:45.387838859Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:45.390460 containerd[1595]: time="2025-07-08T10:08:45.390433223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:45.391301 containerd[1595]: time="2025-07-08T10:08:45.391232342Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.512904486s" Jul 8 10:08:45.391301 containerd[1595]: time="2025-07-08T10:08:45.391292244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 8 10:08:45.391888 containerd[1595]: time="2025-07-08T10:08:45.391830864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 8 10:08:46.861418 containerd[1595]: time="2025-07-08T10:08:46.861344622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:46.862240 containerd[1595]: time="2025-07-08T10:08:46.862196519Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 8 10:08:46.863544 containerd[1595]: time="2025-07-08T10:08:46.863487209Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:46.866021 containerd[1595]: time="2025-07-08T10:08:46.865966077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:46.866954 containerd[1595]: time="2025-07-08T10:08:46.866920907Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.474971912s" Jul 8 10:08:46.866954 containerd[1595]: time="2025-07-08T10:08:46.866950773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 8 10:08:46.867608 containerd[1595]: time="2025-07-08T10:08:46.867579923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 8 10:08:47.483562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 8 10:08:47.485340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:08:47.757247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 8 10:08:47.772377 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 8 10:08:47.887324 kubelet[2116]: E0708 10:08:47.887263 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 8 10:08:47.893888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 8 10:08:47.894105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 8 10:08:47.894488 systemd[1]: kubelet.service: Consumed 284ms CPU time, 111.9M memory peak. Jul 8 10:08:48.264821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055759969.mount: Deactivated successfully. Jul 8 10:08:48.801455 containerd[1595]: time="2025-07-08T10:08:48.801388413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:48.802189 containerd[1595]: time="2025-07-08T10:08:48.802117260Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 8 10:08:48.803337 containerd[1595]: time="2025-07-08T10:08:48.803295409Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:48.805126 containerd[1595]: time="2025-07-08T10:08:48.805070157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:48.805580 containerd[1595]: time="2025-07-08T10:08:48.805543354Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.937933454s" Jul 8 10:08:48.805580 containerd[1595]: time="2025-07-08T10:08:48.805570805Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 8 10:08:48.806060 containerd[1595]: time="2025-07-08T10:08:48.806030728Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 8 10:08:49.719588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924501406.mount: Deactivated successfully. Jul 8 10:08:50.552270 containerd[1595]: time="2025-07-08T10:08:50.552206607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:50.552897 containerd[1595]: time="2025-07-08T10:08:50.552832400Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 8 10:08:50.554051 containerd[1595]: time="2025-07-08T10:08:50.554009568Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:50.556567 containerd[1595]: time="2025-07-08T10:08:50.556538570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:50.557427 containerd[1595]: time="2025-07-08T10:08:50.557377903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.751320295s" Jul 8 10:08:50.557427 containerd[1595]: time="2025-07-08T10:08:50.557416135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 8 10:08:50.558094 containerd[1595]: time="2025-07-08T10:08:50.558036608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 8 10:08:51.082449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913878567.mount: Deactivated successfully. Jul 8 10:08:51.087735 containerd[1595]: time="2025-07-08T10:08:51.087695539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:08:51.088452 containerd[1595]: time="2025-07-08T10:08:51.088406462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 8 10:08:51.089520 containerd[1595]: time="2025-07-08T10:08:51.089497858Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:08:51.091372 containerd[1595]: time="2025-07-08T10:08:51.091343238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 8 10:08:51.091956 containerd[1595]: time="2025-07-08T10:08:51.091915681Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.853545ms" Jul 8 10:08:51.091985 containerd[1595]: time="2025-07-08T10:08:51.091954234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 8 10:08:51.092618 containerd[1595]: time="2025-07-08T10:08:51.092452799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 8 10:08:51.669705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363306959.mount: Deactivated successfully. Jul 8 10:08:53.563587 containerd[1595]: time="2025-07-08T10:08:53.563518256Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 8 10:08:53.564032 containerd[1595]: time="2025-07-08T10:08:53.563665532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:53.565329 containerd[1595]: time="2025-07-08T10:08:53.565275350Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:53.568233 containerd[1595]: time="2025-07-08T10:08:53.568173655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:08:53.569242 containerd[1595]: time="2025-07-08T10:08:53.569215799Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.476737482s" Jul 8 10:08:53.569292 containerd[1595]: time="2025-07-08T10:08:53.569244953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 8 10:08:56.353573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:08:56.353740 systemd[1]: kubelet.service: Consumed 284ms CPU time, 111.9M memory peak. Jul 8 10:08:56.355983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:08:56.390072 systemd[1]: Reload requested from client PID 2273 ('systemctl') (unit session-7.scope)... Jul 8 10:08:56.390125 systemd[1]: Reloading... Jul 8 10:08:56.472134 zram_generator::config[2316]: No configuration found. Jul 8 10:08:56.643100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:08:56.770306 systemd[1]: Reloading finished in 379 ms. Jul 8 10:08:56.842040 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 8 10:08:56.842169 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 8 10:08:56.842507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:08:56.842559 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak. Jul 8 10:08:56.844216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:08:57.053611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 8 10:08:57.066461 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 8 10:08:57.166321 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:08:57.166321 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 8 10:08:57.166321 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:08:57.167348 kubelet[2364]: I0708 10:08:57.167248 2364 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 8 10:08:57.575187 kubelet[2364]: I0708 10:08:57.575134 2364 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 8 10:08:57.575187 kubelet[2364]: I0708 10:08:57.575175 2364 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 8 10:08:57.575549 kubelet[2364]: I0708 10:08:57.575523 2364 server.go:934] "Client rotation is on, will bootstrap in background" Jul 8 10:08:57.600369 kubelet[2364]: E0708 10:08:57.600322 2364 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:57.600832 kubelet[2364]: I0708 10:08:57.600798 
2364 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 8 10:08:57.608680 kubelet[2364]: I0708 10:08:57.608639 2364 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 8 10:08:57.615148 kubelet[2364]: I0708 10:08:57.615117 2364 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 8 10:08:57.615694 kubelet[2364]: I0708 10:08:57.615660 2364 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 8 10:08:57.615853 kubelet[2364]: I0708 10:08:57.615810 2364 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 8 10:08:57.616874 kubelet[2364]: I0708 10:08:57.615841 2364 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Pe
rcentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 8 10:08:57.616874 kubelet[2364]: I0708 10:08:57.616473 2364 topology_manager.go:138] "Creating topology manager with none policy" Jul 8 10:08:57.616874 kubelet[2364]: I0708 10:08:57.616505 2364 container_manager_linux.go:300] "Creating device plugin manager" Jul 8 10:08:57.616874 kubelet[2364]: I0708 10:08:57.616698 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:08:57.620682 kubelet[2364]: I0708 10:08:57.620646 2364 kubelet.go:408] "Attempting to sync node with API server" Jul 8 10:08:57.620741 kubelet[2364]: I0708 10:08:57.620694 2364 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 8 10:08:57.620775 kubelet[2364]: I0708 10:08:57.620764 2364 kubelet.go:314] "Adding apiserver pod source" Jul 8 10:08:57.621064 kubelet[2364]: I0708 10:08:57.620804 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 8 10:08:57.628426 kubelet[2364]: I0708 10:08:57.628398 2364 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 8 10:08:57.628751 kubelet[2364]: W0708 10:08:57.628658 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:57.628826 kubelet[2364]: E0708 10:08:57.628784 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:57.628826 kubelet[2364]: W0708 10:08:57.628726 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:57.628892 kubelet[2364]: E0708 10:08:57.628834 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:57.630168 kubelet[2364]: I0708 10:08:57.630133 2364 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 8 10:08:57.631057 kubelet[2364]: W0708 10:08:57.631012 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 8 10:08:57.633384 kubelet[2364]: I0708 10:08:57.633347 2364 server.go:1274] "Started kubelet" Jul 8 10:08:57.633484 kubelet[2364]: I0708 10:08:57.633445 2364 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 8 10:08:57.634899 kubelet[2364]: I0708 10:08:57.634571 2364 server.go:449] "Adding debug handlers to kubelet server" Jul 8 10:08:57.638368 kubelet[2364]: I0708 10:08:57.638333 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 8 10:08:57.640851 kubelet[2364]: I0708 10:08:57.640488 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 8 10:08:57.640851 kubelet[2364]: I0708 10:08:57.640520 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 8 10:08:57.640851 kubelet[2364]: I0708 10:08:57.640771 2364 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 8 10:08:57.642212 kubelet[2364]: E0708 10:08:57.641709 2364 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 8 10:08:57.642535 kubelet[2364]: I0708 10:08:57.642506 2364 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 8 10:08:57.642797 kubelet[2364]: I0708 10:08:57.642669 2364 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 8 10:08:57.642797 kubelet[2364]: I0708 10:08:57.642741 2364 reconciler.go:26] "Reconciler: start to sync state" Jul 8 10:08:57.644328 kubelet[2364]: E0708 10:08:57.642653 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18503ed48b029427 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-08 10:08:57.633313831 +0000 UTC m=+0.562552336,LastTimestamp:2025-07-08 10:08:57.633313831 +0000 UTC m=+0.562552336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 8 10:08:57.644598 kubelet[2364]: W0708 10:08:57.644532 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:57.644662 kubelet[2364]: E0708 10:08:57.644608 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: 
connection refused" logger="UnhandledError" Jul 8 10:08:57.644662 kubelet[2364]: I0708 10:08:57.644555 2364 factory.go:221] Registration of the systemd container factory successfully Jul 8 10:08:57.644728 kubelet[2364]: E0708 10:08:57.644670 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:08:57.644752 kubelet[2364]: I0708 10:08:57.644725 2364 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 8 10:08:57.644798 kubelet[2364]: E0708 10:08:57.644771 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Jul 8 10:08:57.645994 kubelet[2364]: I0708 10:08:57.645964 2364 factory.go:221] Registration of the containerd container factory successfully Jul 8 10:08:57.657412 kubelet[2364]: I0708 10:08:57.657329 2364 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 8 10:08:57.659712 kubelet[2364]: I0708 10:08:57.659641 2364 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 8 10:08:57.659712 kubelet[2364]: I0708 10:08:57.659703 2364 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 8 10:08:57.659858 kubelet[2364]: I0708 10:08:57.659751 2364 kubelet.go:2321] "Starting kubelet main sync loop" Jul 8 10:08:57.659858 kubelet[2364]: E0708 10:08:57.659803 2364 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 8 10:08:57.664684 kubelet[2364]: W0708 10:08:57.664642 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:57.664823 kubelet[2364]: E0708 10:08:57.664798 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:57.665918 kubelet[2364]: I0708 10:08:57.665874 2364 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 8 10:08:57.665918 kubelet[2364]: I0708 10:08:57.665897 2364 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 8 10:08:57.665918 kubelet[2364]: I0708 10:08:57.665920 2364 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:08:57.745112 kubelet[2364]: E0708 10:08:57.745054 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:08:57.760397 kubelet[2364]: E0708 10:08:57.760349 2364 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 8 10:08:57.845851 kubelet[2364]: E0708 10:08:57.845763 2364 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:08:57.846463 kubelet[2364]: E0708 10:08:57.846431 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Jul 8 10:08:57.946904 kubelet[2364]: E0708 10:08:57.946842 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:08:57.961052 kubelet[2364]: E0708 10:08:57.961013 2364 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 8 10:08:58.032270 kubelet[2364]: I0708 10:08:58.032213 2364 policy_none.go:49] "None policy: Start" Jul 8 10:08:58.033363 kubelet[2364]: I0708 10:08:58.033325 2364 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 8 10:08:58.033410 kubelet[2364]: I0708 10:08:58.033373 2364 state_mem.go:35] "Initializing new in-memory state store" Jul 8 10:08:58.046976 kubelet[2364]: E0708 10:08:58.046945 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:08:58.049578 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 8 10:08:58.066382 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 8 10:08:58.071136 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 8 10:08:58.087234 kubelet[2364]: I0708 10:08:58.087165 2364 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 8 10:08:58.087637 kubelet[2364]: I0708 10:08:58.087408 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 8 10:08:58.087637 kubelet[2364]: I0708 10:08:58.087431 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 8 10:08:58.087799 kubelet[2364]: I0708 10:08:58.087776 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 8 10:08:58.089234 kubelet[2364]: E0708 10:08:58.089202 2364 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 8 10:08:58.189429 kubelet[2364]: I0708 10:08:58.189328 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 8 10:08:58.189781 kubelet[2364]: E0708 10:08:58.189713 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 8 10:08:58.247591 kubelet[2364]: E0708 10:08:58.247541 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Jul 8 10:08:58.370932 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 8 10:08:58.390919 kubelet[2364]: I0708 10:08:58.390887 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 8 10:08:58.391217 kubelet[2364]: E0708 10:08:58.391171 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 8 10:08:58.405561 systemd[1]: Created slice kubepods-burstable-pod6ad2e780728bd763a8e76ad42e3d810a.slice - libcontainer container kubepods-burstable-pod6ad2e780728bd763a8e76ad42e3d810a.slice. Jul 8 10:08:58.428999 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 8 10:08:58.446379 kubelet[2364]: I0708 10:08:58.446046 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:08:58.446379 kubelet[2364]: I0708 10:08:58.446104 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:08:58.446379 kubelet[2364]: I0708 10:08:58.446142 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:08:58.446379 
kubelet[2364]: I0708 10:08:58.446158 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:08:58.446379 kubelet[2364]: I0708 10:08:58.446176 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:08:58.446569 kubelet[2364]: I0708 10:08:58.446199 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:08:58.446569 kubelet[2364]: I0708 10:08:58.446212 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:08:58.446569 kubelet[2364]: I0708 10:08:58.446224 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 
10:08:58.446569 kubelet[2364]: I0708 10:08:58.446240 2364 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 8 10:08:58.606675 kubelet[2364]: W0708 10:08:58.606577 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:58.606744 kubelet[2364]: E0708 10:08:58.606680 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:58.704828 containerd[1595]: time="2025-07-08T10:08:58.704695662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 8 10:08:58.727494 containerd[1595]: time="2025-07-08T10:08:58.727452080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ad2e780728bd763a8e76ad42e3d810a,Namespace:kube-system,Attempt:0,}" Jul 8 10:08:58.732206 containerd[1595]: time="2025-07-08T10:08:58.732169746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 8 10:08:58.735852 kubelet[2364]: W0708 10:08:58.735772 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:58.735909 kubelet[2364]: E0708 10:08:58.735849 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:58.741600 kubelet[2364]: W0708 10:08:58.741540 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:58.741600 kubelet[2364]: E0708 10:08:58.741580 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:58.793061 kubelet[2364]: I0708 10:08:58.793035 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 8 10:08:58.793352 kubelet[2364]: E0708 10:08:58.793323 2364 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Jul 8 10:08:58.944839 containerd[1595]: time="2025-07-08T10:08:58.944777736Z" level=info msg="connecting to shim f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758" address="unix:///run/containerd/s/e88288274a6725b0a740b188080e3e1abd67618a28bf5b8a2e5d1549885c3e77" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:08:58.950854 kubelet[2364]: W0708 
10:08:58.950777 2364 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.19:6443: connect: connection refused Jul 8 10:08:58.950854 kubelet[2364]: E0708 10:08:58.950851 2364 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" Jul 8 10:08:58.958434 containerd[1595]: time="2025-07-08T10:08:58.958292786Z" level=info msg="connecting to shim 1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b" address="unix:///run/containerd/s/e2ce40300bcb7ae1e2f73ee7e620a183cc7f9b2068102cd9ac8e0b6a5fcd2d9d" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:08:58.966288 containerd[1595]: time="2025-07-08T10:08:58.966239727Z" level=info msg="connecting to shim 529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728" address="unix:///run/containerd/s/4a841aa21922e261eef3f287ac11feaf0f2c6d6d8ac7eb6e63293f3833cddf8b" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:08:59.049072 kubelet[2364]: E0708 10:08:59.049005 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Jul 8 10:08:59.091376 systemd[1]: Started cri-containerd-1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b.scope - libcontainer container 1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b. 
Jul 8 10:08:59.097417 systemd[1]: Started cri-containerd-529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728.scope - libcontainer container 529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728. Jul 8 10:08:59.099276 systemd[1]: Started cri-containerd-f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758.scope - libcontainer container f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758. Jul 8 10:08:59.169365 containerd[1595]: time="2025-07-08T10:08:59.169207862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728\"" Jul 8 10:08:59.172045 containerd[1595]: time="2025-07-08T10:08:59.171921821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b\"" Jul 8 10:08:59.174400 containerd[1595]: time="2025-07-08T10:08:59.174366935Z" level=info msg="CreateContainer within sandbox \"529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 8 10:08:59.174864 containerd[1595]: time="2025-07-08T10:08:59.174624388Z" level=info msg="CreateContainer within sandbox \"1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 8 10:08:59.186605 containerd[1595]: time="2025-07-08T10:08:59.186567201Z" level=info msg="Container e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:08:59.187703 containerd[1595]: time="2025-07-08T10:08:59.187573267Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ad2e780728bd763a8e76ad42e3d810a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758\"" Jul 8 10:08:59.190395 containerd[1595]: time="2025-07-08T10:08:59.190362076Z" level=info msg="CreateContainer within sandbox \"f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 8 10:08:59.195393 containerd[1595]: time="2025-07-08T10:08:59.195354317Z" level=info msg="CreateContainer within sandbox \"529154c559c4e509e2473a781635086b92da04954b60614c025fe45a18941728\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9\"" Jul 8 10:08:59.195928 containerd[1595]: time="2025-07-08T10:08:59.195905250Z" level=info msg="StartContainer for \"e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9\"" Jul 8 10:08:59.197053 containerd[1595]: time="2025-07-08T10:08:59.197031351Z" level=info msg="connecting to shim e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9" address="unix:///run/containerd/s/4a841aa21922e261eef3f287ac11feaf0f2c6d6d8ac7eb6e63293f3833cddf8b" protocol=ttrpc version=3 Jul 8 10:08:59.197370 containerd[1595]: time="2025-07-08T10:08:59.197261062Z" level=info msg="Container fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:08:59.202542 containerd[1595]: time="2025-07-08T10:08:59.202522547Z" level=info msg="Container f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:08:59.210843 containerd[1595]: time="2025-07-08T10:08:59.210767767Z" level=info msg="CreateContainer within sandbox \"1a61c3ce45d1fcf7bfce8dc9e49fb253ed1df7c27ecdcbee8594acaabca32d3b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958\"" Jul 8 10:08:59.211690 containerd[1595]: time="2025-07-08T10:08:59.211657656Z" level=info msg="StartContainer for \"fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958\"" Jul 8 10:08:59.213482 containerd[1595]: time="2025-07-08T10:08:59.213443233Z" level=info msg="connecting to shim fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958" address="unix:///run/containerd/s/e2ce40300bcb7ae1e2f73ee7e620a183cc7f9b2068102cd9ac8e0b6a5fcd2d9d" protocol=ttrpc version=3 Jul 8 10:08:59.214596 containerd[1595]: time="2025-07-08T10:08:59.214549448Z" level=info msg="CreateContainer within sandbox \"f85164ec4dfa6db99761edb6e1a2550971b7db776a30f3f9f6352ceceffa0758\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401\"" Jul 8 10:08:59.215427 containerd[1595]: time="2025-07-08T10:08:59.215239011Z" level=info msg="StartContainer for \"f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401\"" Jul 8 10:08:59.216695 containerd[1595]: time="2025-07-08T10:08:59.216675845Z" level=info msg="connecting to shim f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401" address="unix:///run/containerd/s/e88288274a6725b0a740b188080e3e1abd67618a28bf5b8a2e5d1549885c3e77" protocol=ttrpc version=3 Jul 8 10:08:59.219408 systemd[1]: Started cri-containerd-e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9.scope - libcontainer container e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9. Jul 8 10:08:59.242219 systemd[1]: Started cri-containerd-f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401.scope - libcontainer container f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401. 
Jul 8 10:08:59.243808 systemd[1]: Started cri-containerd-fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958.scope - libcontainer container fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958. Jul 8 10:08:59.296818 containerd[1595]: time="2025-07-08T10:08:59.296761806Z" level=info msg="StartContainer for \"f6d4794a442b25e7ca1dfbcdec4314e85c22b8fc5cec1219e3ca3c2e59f84401\" returns successfully" Jul 8 10:08:59.297600 containerd[1595]: time="2025-07-08T10:08:59.297286239Z" level=info msg="StartContainer for \"e0ce95d5551d106d4c3a94cfcd5973016523d62d45080aa92c96c5b8b0fd2de9\" returns successfully" Jul 8 10:08:59.301021 containerd[1595]: time="2025-07-08T10:08:59.301003749Z" level=info msg="StartContainer for \"fe3297e1bf3045dc86ae69add505400ed527624e4d867dca8649af8897ef6958\" returns successfully" Jul 8 10:08:59.595634 kubelet[2364]: I0708 10:08:59.595539 2364 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 8 10:09:00.652200 kubelet[2364]: E0708 10:09:00.652143 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 8 10:09:00.701111 kubelet[2364]: I0708 10:09:00.699186 2364 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 8 10:09:00.701111 kubelet[2364]: E0708 10:09:00.699235 2364 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 8 10:09:00.710646 kubelet[2364]: E0708 10:09:00.710579 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:09:00.811380 kubelet[2364]: E0708 10:09:00.811311 2364 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:09:01.624238 kubelet[2364]: I0708 10:09:01.624185 2364 apiserver.go:52] "Watching apiserver" Jul 8 10:09:01.643773 kubelet[2364]: 
I0708 10:09:01.643726 2364 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 8 10:09:02.495735 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)... Jul 8 10:09:02.495751 systemd[1]: Reloading... Jul 8 10:09:02.640878 zram_generator::config[2690]: No configuration found. Jul 8 10:09:02.741408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 8 10:09:02.872737 systemd[1]: Reloading finished in 376 ms. Jul 8 10:09:02.909312 kubelet[2364]: I0708 10:09:02.909205 2364 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 8 10:09:02.909306 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:09:02.932382 systemd[1]: kubelet.service: Deactivated successfully. Jul 8 10:09:02.932686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:09:02.932747 systemd[1]: kubelet.service: Consumed 1.071s CPU time, 131.4M memory peak. Jul 8 10:09:02.934647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 8 10:09:03.155526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 8 10:09:03.165380 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 8 10:09:03.225687 kubelet[2732]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:09:03.225687 kubelet[2732]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 8 10:09:03.225687 kubelet[2732]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 8 10:09:03.226154 kubelet[2732]: I0708 10:09:03.225736 2732 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 8 10:09:03.231916 kubelet[2732]: I0708 10:09:03.231885 2732 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 8 10:09:03.231916 kubelet[2732]: I0708 10:09:03.231911 2732 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 8 10:09:03.232190 kubelet[2732]: I0708 10:09:03.232170 2732 server.go:934] "Client rotation is on, will bootstrap in background" Jul 8 10:09:03.233475 kubelet[2732]: I0708 10:09:03.233457 2732 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 8 10:09:03.235462 kubelet[2732]: I0708 10:09:03.235405 2732 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 8 10:09:03.239215 kubelet[2732]: I0708 10:09:03.239170 2732 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 8 10:09:03.244031 kubelet[2732]: I0708 10:09:03.244005 2732 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 8 10:09:03.244159 kubelet[2732]: I0708 10:09:03.244145 2732 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 8 10:09:03.244323 kubelet[2732]: I0708 10:09:03.244281 2732 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 8 10:09:03.244484 kubelet[2732]: I0708 10:09:03.244310 2732 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 8 10:09:03.244568 kubelet[2732]: I0708 10:09:03.244485 2732 topology_manager.go:138] "Creating topology manager with none policy" Jul 8 10:09:03.244568 kubelet[2732]: I0708 10:09:03.244510 2732 container_manager_linux.go:300] "Creating device plugin manager" Jul 8 10:09:03.244568 kubelet[2732]: I0708 10:09:03.244540 2732 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:09:03.244658 kubelet[2732]: I0708 10:09:03.244642 2732 kubelet.go:408] "Attempting to sync node with API server" Jul 8 10:09:03.244658 kubelet[2732]: I0708 10:09:03.244655 2732 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 8 10:09:03.244704 kubelet[2732]: I0708 10:09:03.244688 2732 kubelet.go:314] "Adding apiserver pod source" Jul 8 10:09:03.244704 kubelet[2732]: I0708 10:09:03.244700 2732 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 8 10:09:03.245589 kubelet[2732]: I0708 10:09:03.245528 2732 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 8 10:09:03.246516 kubelet[2732]: I0708 10:09:03.246498 2732 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 8 10:09:03.246918 kubelet[2732]: I0708 10:09:03.246877 2732 server.go:1274] "Started kubelet" Jul 8 10:09:03.248583 kubelet[2732]: I0708 10:09:03.248543 2732 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 8 10:09:03.249182 kubelet[2732]: I0708 10:09:03.249150 2732 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 8 10:09:03.249301 kubelet[2732]: I0708 10:09:03.249229 2732 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 8 10:09:03.250033 kubelet[2732]: I0708 10:09:03.249995 2732 server.go:449] "Adding debug handlers to kubelet server" Jul 8 10:09:03.250869 kubelet[2732]: I0708 
10:09:03.250847 2732 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 8 10:09:03.251058 kubelet[2732]: I0708 10:09:03.251034 2732 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 8 10:09:03.251949 kubelet[2732]: I0708 10:09:03.251809 2732 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 8 10:09:03.252356 kubelet[2732]: E0708 10:09:03.252297 2732 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 8 10:09:03.252571 kubelet[2732]: I0708 10:09:03.252548 2732 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 8 10:09:03.252894 kubelet[2732]: I0708 10:09:03.252727 2732 reconciler.go:26] "Reconciler: start to sync state" Jul 8 10:09:03.253770 kubelet[2732]: I0708 10:09:03.253724 2732 factory.go:221] Registration of the systemd container factory successfully Jul 8 10:09:03.253864 kubelet[2732]: I0708 10:09:03.253837 2732 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 8 10:09:03.256132 kubelet[2732]: I0708 10:09:03.255737 2732 factory.go:221] Registration of the containerd container factory successfully Jul 8 10:09:03.256492 kubelet[2732]: E0708 10:09:03.256467 2732 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 8 10:09:03.279047 kubelet[2732]: I0708 10:09:03.279003 2732 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 8 10:09:03.280372 kubelet[2732]: I0708 10:09:03.280345 2732 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 8 10:09:03.280423 kubelet[2732]: I0708 10:09:03.280382 2732 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 8 10:09:03.280423 kubelet[2732]: I0708 10:09:03.280399 2732 kubelet.go:2321] "Starting kubelet main sync loop" Jul 8 10:09:03.280469 kubelet[2732]: E0708 10:09:03.280438 2732 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 8 10:09:03.303254 kubelet[2732]: I0708 10:09:03.303223 2732 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 8 10:09:03.303254 kubelet[2732]: I0708 10:09:03.303242 2732 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 8 10:09:03.303370 kubelet[2732]: I0708 10:09:03.303266 2732 state_mem.go:36] "Initialized new in-memory state store" Jul 8 10:09:03.303455 kubelet[2732]: I0708 10:09:03.303436 2732 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 8 10:09:03.303481 kubelet[2732]: I0708 10:09:03.303449 2732 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 8 10:09:03.303481 kubelet[2732]: I0708 10:09:03.303468 2732 policy_none.go:49] "None policy: Start" Jul 8 10:09:03.304047 kubelet[2732]: I0708 10:09:03.304026 2732 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 8 10:09:03.304132 kubelet[2732]: I0708 10:09:03.304067 2732 state_mem.go:35] "Initializing new in-memory state store" Jul 8 10:09:03.304272 kubelet[2732]: I0708 10:09:03.304254 2732 state_mem.go:75] "Updated machine memory state" Jul 8 10:09:03.308461 kubelet[2732]: I0708 10:09:03.308303 2732 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 8 10:09:03.308582 kubelet[2732]: I0708 10:09:03.308477 2732 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 8 10:09:03.308582 kubelet[2732]: I0708 10:09:03.308489 2732 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 8 10:09:03.309800 kubelet[2732]: I0708 10:09:03.308929 2732 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 8 10:09:03.387740 kubelet[2732]: E0708 10:09:03.387695 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 8 10:09:03.411990 kubelet[2732]: I0708 10:09:03.411529 2732 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 8 10:09:03.419100 kubelet[2732]: I0708 10:09:03.419040 2732 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 8 10:09:03.419194 kubelet[2732]: I0708 10:09:03.419156 2732 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 8 10:09:03.554338 kubelet[2732]: I0708 10:09:03.554285 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:09:03.554338 kubelet[2732]: I0708 10:09:03.554332 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 10:09:03.554550 kubelet[2732]: I0708 10:09:03.554357 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:09:03.554550 kubelet[2732]: 
I0708 10:09:03.554374 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:09:03.554550 kubelet[2732]: I0708 10:09:03.554392 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:09:03.554550 kubelet[2732]: I0708 10:09:03.554432 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:09:03.554550 kubelet[2732]: I0708 10:09:03.554453 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 8 10:09:03.554693 kubelet[2732]: I0708 10:09:03.554470 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ad2e780728bd763a8e76ad42e3d810a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ad2e780728bd763a8e76ad42e3d810a\") " pod="kube-system/kube-apiserver-localhost" Jul 8 
10:09:03.554693 kubelet[2732]: I0708 10:09:03.554484 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 8 10:09:04.246209 kubelet[2732]: I0708 10:09:04.246159 2732 apiserver.go:52] "Watching apiserver" Jul 8 10:09:04.253181 kubelet[2732]: I0708 10:09:04.253153 2732 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 8 10:09:04.377113 kubelet[2732]: E0708 10:09:04.376560 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 8 10:09:04.377605 kubelet[2732]: E0708 10:09:04.377580 2732 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 8 10:09:04.389541 kubelet[2732]: I0708 10:09:04.389008 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.38897284 podStartE2EDuration="1.38897284s" podCreationTimestamp="2025-07-08 10:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:09:04.377116009 +0000 UTC m=+1.188939122" watchObservedRunningTime="2025-07-08 10:09:04.38897284 +0000 UTC m=+1.200795943" Jul 8 10:09:04.389805 kubelet[2732]: I0708 10:09:04.389608 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.389598934 podStartE2EDuration="1.389598934s" podCreationTimestamp="2025-07-08 10:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:09:04.388645096 +0000 UTC m=+1.200468209" watchObservedRunningTime="2025-07-08 10:09:04.389598934 +0000 UTC m=+1.201422037" Jul 8 10:09:04.399163 kubelet[2732]: I0708 10:09:04.399062 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.399031931 podStartE2EDuration="3.399031931s" podCreationTimestamp="2025-07-08 10:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:09:04.397930937 +0000 UTC m=+1.209754050" watchObservedRunningTime="2025-07-08 10:09:04.399031931 +0000 UTC m=+1.210855044" Jul 8 10:09:08.960201 kubelet[2732]: I0708 10:09:08.960134 2732 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 8 10:09:08.960681 containerd[1595]: time="2025-07-08T10:09:08.960640730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 8 10:09:08.961050 kubelet[2732]: I0708 10:09:08.960993 2732 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 8 10:09:10.291636 systemd[1]: Created slice kubepods-besteffort-podec699017_52fd_4219_b2a8_bcbdab70c0d4.slice - libcontainer container kubepods-besteffort-podec699017_52fd_4219_b2a8_bcbdab70c0d4.slice. 
Jul 8 10:09:10.307360 kubelet[2732]: I0708 10:09:10.307257 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec699017-52fd-4219-b2a8-bcbdab70c0d4-kube-proxy\") pod \"kube-proxy-l95w8\" (UID: \"ec699017-52fd-4219-b2a8-bcbdab70c0d4\") " pod="kube-system/kube-proxy-l95w8" Jul 8 10:09:10.307360 kubelet[2732]: I0708 10:09:10.307319 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec699017-52fd-4219-b2a8-bcbdab70c0d4-xtables-lock\") pod \"kube-proxy-l95w8\" (UID: \"ec699017-52fd-4219-b2a8-bcbdab70c0d4\") " pod="kube-system/kube-proxy-l95w8" Jul 8 10:09:10.307831 kubelet[2732]: I0708 10:09:10.307356 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrkr\" (UniqueName: \"kubernetes.io/projected/ec699017-52fd-4219-b2a8-bcbdab70c0d4-kube-api-access-4vrkr\") pod \"kube-proxy-l95w8\" (UID: \"ec699017-52fd-4219-b2a8-bcbdab70c0d4\") " pod="kube-system/kube-proxy-l95w8" Jul 8 10:09:10.307831 kubelet[2732]: I0708 10:09:10.307505 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec699017-52fd-4219-b2a8-bcbdab70c0d4-lib-modules\") pod \"kube-proxy-l95w8\" (UID: \"ec699017-52fd-4219-b2a8-bcbdab70c0d4\") " pod="kube-system/kube-proxy-l95w8" Jul 8 10:09:10.324383 systemd[1]: Created slice kubepods-besteffort-pod911e31d3_d511_474d_a74e_4cacc51f6bbe.slice - libcontainer container kubepods-besteffort-pod911e31d3_d511_474d_a74e_4cacc51f6bbe.slice. 
Jul 8 10:09:10.408711 kubelet[2732]: I0708 10:09:10.408602 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjdt4\" (UniqueName: \"kubernetes.io/projected/911e31d3-d511-474d-a74e-4cacc51f6bbe-kube-api-access-sjdt4\") pod \"tigera-operator-5bf8dfcb4-wtzhr\" (UID: \"911e31d3-d511-474d-a74e-4cacc51f6bbe\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wtzhr" Jul 8 10:09:10.408711 kubelet[2732]: I0708 10:09:10.408673 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/911e31d3-d511-474d-a74e-4cacc51f6bbe-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-wtzhr\" (UID: \"911e31d3-d511-474d-a74e-4cacc51f6bbe\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-wtzhr" Jul 8 10:09:10.603292 containerd[1595]: time="2025-07-08T10:09:10.603131567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l95w8,Uid:ec699017-52fd-4219-b2a8-bcbdab70c0d4,Namespace:kube-system,Attempt:0,}" Jul 8 10:09:10.629024 containerd[1595]: time="2025-07-08T10:09:10.628914331Z" level=info msg="connecting to shim a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf" address="unix:///run/containerd/s/99c14ce9b8aad0bce7e5797e9099aff0f513ff6b59438495c707f15804885896" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:10.629493 containerd[1595]: time="2025-07-08T10:09:10.629424526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wtzhr,Uid:911e31d3-d511-474d-a74e-4cacc51f6bbe,Namespace:tigera-operator,Attempt:0,}" Jul 8 10:09:10.671427 systemd[1]: Started cri-containerd-a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf.scope - libcontainer container a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf. 
Jul 8 10:09:10.673141 containerd[1595]: time="2025-07-08T10:09:10.673087349Z" level=info msg="connecting to shim 4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e" address="unix:///run/containerd/s/d4f3cb8b73c3750bf55a81b9cff3a0953888b4b61e7caf5d951d4ca43121341c" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:10.708264 systemd[1]: Started cri-containerd-4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e.scope - libcontainer container 4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e. Jul 8 10:09:10.806543 containerd[1595]: time="2025-07-08T10:09:10.806488131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l95w8,Uid:ec699017-52fd-4219-b2a8-bcbdab70c0d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf\"" Jul 8 10:09:10.810248 containerd[1595]: time="2025-07-08T10:09:10.810205584Z" level=info msg="CreateContainer within sandbox \"a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 8 10:09:11.069201 containerd[1595]: time="2025-07-08T10:09:11.069144166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-wtzhr,Uid:911e31d3-d511-474d-a74e-4cacc51f6bbe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e\"" Jul 8 10:09:11.071371 containerd[1595]: time="2025-07-08T10:09:11.071171831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 8 10:09:11.620305 containerd[1595]: time="2025-07-08T10:09:11.620253562Z" level=info msg="Container ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:12.063457 containerd[1595]: time="2025-07-08T10:09:12.063409346Z" level=info msg="CreateContainer within sandbox \"a02cd6c81bf4eace13270e1cbc3e138024e6fd8b3e90b42c7ae49d77454a7cbf\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9\"" Jul 8 10:09:12.064019 containerd[1595]: time="2025-07-08T10:09:12.063964194Z" level=info msg="StartContainer for \"ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9\"" Jul 8 10:09:12.066567 containerd[1595]: time="2025-07-08T10:09:12.066507640Z" level=info msg="connecting to shim ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9" address="unix:///run/containerd/s/99c14ce9b8aad0bce7e5797e9099aff0f513ff6b59438495c707f15804885896" protocol=ttrpc version=3 Jul 8 10:09:12.098244 systemd[1]: Started cri-containerd-ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9.scope - libcontainer container ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9. Jul 8 10:09:12.618323 containerd[1595]: time="2025-07-08T10:09:12.618246625Z" level=info msg="StartContainer for \"ae8bcf8e4f25dade3086e3f536a206c35026352622429beab2d3413df61844c9\" returns successfully" Jul 8 10:09:14.221067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070905720.mount: Deactivated successfully. 
Jul 8 10:09:14.555449 containerd[1595]: time="2025-07-08T10:09:14.555361579Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:14.556252 containerd[1595]: time="2025-07-08T10:09:14.556204946Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 8 10:09:14.557521 containerd[1595]: time="2025-07-08T10:09:14.557481999Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:14.559351 containerd[1595]: time="2025-07-08T10:09:14.559307898Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:14.559887 containerd[1595]: time="2025-07-08T10:09:14.559845552Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.48862648s" Jul 8 10:09:14.559887 containerd[1595]: time="2025-07-08T10:09:14.559876671Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 8 10:09:14.562496 containerd[1595]: time="2025-07-08T10:09:14.562436057Z" level=info msg="CreateContainer within sandbox \"4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 8 10:09:14.745914 containerd[1595]: time="2025-07-08T10:09:14.745851075Z" level=info msg="Container 
9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:15.132472 containerd[1595]: time="2025-07-08T10:09:15.132410355Z" level=info msg="CreateContainer within sandbox \"4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\"" Jul 8 10:09:15.132879 containerd[1595]: time="2025-07-08T10:09:15.132838199Z" level=info msg="StartContainer for \"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\"" Jul 8 10:09:15.133718 containerd[1595]: time="2025-07-08T10:09:15.133686033Z" level=info msg="connecting to shim 9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c" address="unix:///run/containerd/s/d4f3cb8b73c3750bf55a81b9cff3a0953888b4b61e7caf5d951d4ca43121341c" protocol=ttrpc version=3 Jul 8 10:09:15.190263 systemd[1]: Started cri-containerd-9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c.scope - libcontainer container 9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c. 
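The containerd pull record above reports `bytes read=25056543` and a pull time of `3.48862648s` for `quay.io/tigera/operator:v1.38.3`. A quick back-of-the-envelope check of the effective pull throughput, using only those two figures from the log (the calculation itself is a sketch, not part of the log):

```python
# Figures taken verbatim from the containerd log entries above.
bytes_read = 25_056_543      # "bytes read=25056543"
pull_seconds = 3.48862648    # '... in 3.48862648s'

# Effective throughput of the image pull.
rate_mib_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"{rate_mib_s:.2f} MiB/s")  # → 6.85 MiB/s
```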
Jul 8 10:09:15.395162 containerd[1595]: time="2025-07-08T10:09:15.394992664Z" level=info msg="StartContainer for \"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\" returns successfully" Jul 8 10:09:15.630923 kubelet[2732]: I0708 10:09:15.630859 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l95w8" podStartSLOduration=5.630834103 podStartE2EDuration="5.630834103s" podCreationTimestamp="2025-07-08 10:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:09:13.630506738 +0000 UTC m=+10.442329851" watchObservedRunningTime="2025-07-08 10:09:15.630834103 +0000 UTC m=+12.442657226" Jul 8 10:09:16.042461 kubelet[2732]: I0708 10:09:16.042134 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtzhr" podStartSLOduration=2.552034425 podStartE2EDuration="6.042108085s" podCreationTimestamp="2025-07-08 10:09:10 +0000 UTC" firstStartedPulling="2025-07-08 10:09:11.070656616 +0000 UTC m=+7.882479729" lastFinishedPulling="2025-07-08 10:09:14.560730276 +0000 UTC m=+11.372553389" observedRunningTime="2025-07-08 10:09:15.644262118 +0000 UTC m=+12.456085241" watchObservedRunningTime="2025-07-08 10:09:16.042108085 +0000 UTC m=+12.853931198" Jul 8 10:09:17.603274 systemd[1]: cri-containerd-9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c.scope: Deactivated successfully. 
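The `pod_startup_latency_tracker` entries above embed the startup SLO duration as `key=value` fields inside the quoted kubelet line. A minimal parsing sketch, using the field names exactly as they appear in the log (the regex and variable names are illustrative, not kubelet code):

```python
import re

# One pod_startup_latency_tracker payload, abbreviated from the log above.
line = ('pod="kube-system/kube-proxy-l95w8" podStartSLOduration=5.630834103 '
        'podStartE2EDuration="5.630834103s"')

# Pull out the pod name and the numeric SLO duration (seconds).
m = re.search(r'pod="([^"]+)".*?podStartSLOduration=([\d.]+)', line)
pod, slo = m.group(1), float(m.group(2))
print(pod, slo)  # kube-system/kube-proxy-l95w8 5.630834103
```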
Jul 8 10:09:17.606727 containerd[1595]: time="2025-07-08T10:09:17.606583609Z" level=info msg="received exit event container_id:\"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\" id:\"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\" pid:3052 exit_status:1 exited_at:{seconds:1751969357 nanos:606054785}" Jul 8 10:09:17.606727 containerd[1595]: time="2025-07-08T10:09:17.606701864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\" id:\"9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c\" pid:3052 exit_status:1 exited_at:{seconds:1751969357 nanos:606054785}" Jul 8 10:09:17.649851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c-rootfs.mount: Deactivated successfully. Jul 8 10:09:18.632873 kubelet[2732]: I0708 10:09:18.632824 2732 scope.go:117] "RemoveContainer" containerID="9811ad2ae1a31c74aeebd971559f4584d7ebd436e6e79a5e6df24ae61c6d1f7c" Jul 8 10:09:18.635485 containerd[1595]: time="2025-07-08T10:09:18.635435846Z" level=info msg="CreateContainer within sandbox \"4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 8 10:09:18.958245 containerd[1595]: time="2025-07-08T10:09:18.957908898Z" level=info msg="Container 82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:18.996653 update_engine[1583]: I20250708 10:09:18.996532 1583 update_attempter.cc:509] Updating boot flags... 
Jul 8 10:09:19.226338 containerd[1595]: time="2025-07-08T10:09:19.225711563Z" level=info msg="CreateContainer within sandbox \"4c3da60aa70c86bcde152459d54fff5528c4682626c826d5b003f6519365085e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa\"" Jul 8 10:09:19.227041 containerd[1595]: time="2025-07-08T10:09:19.226952437Z" level=info msg="StartContainer for \"82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa\"" Jul 8 10:09:19.228231 containerd[1595]: time="2025-07-08T10:09:19.228185876Z" level=info msg="connecting to shim 82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa" address="unix:///run/containerd/s/d4f3cb8b73c3750bf55a81b9cff3a0953888b4b61e7caf5d951d4ca43121341c" protocol=ttrpc version=3 Jul 8 10:09:19.299333 systemd[1]: Started cri-containerd-82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa.scope - libcontainer container 82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa. Jul 8 10:09:19.467234 containerd[1595]: time="2025-07-08T10:09:19.467182522Z" level=info msg="StartContainer for \"82c435f75408ccf9552a143356c982cccb60b7b3807f57457afb40d3b257fafa\" returns successfully" Jul 8 10:09:21.188751 sudo[1807]: pam_unix(sudo:session): session closed for user root Jul 8 10:09:21.191026 sshd[1806]: Connection closed by 10.0.0.1 port 41776 Jul 8 10:09:21.191732 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jul 8 10:09:21.196558 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:41776.service: Deactivated successfully. Jul 8 10:09:21.198882 systemd[1]: session-7.scope: Deactivated successfully. Jul 8 10:09:21.199121 systemd[1]: session-7.scope: Consumed 5.497s CPU time, 222.6M memory peak. Jul 8 10:09:21.200327 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Jul 8 10:09:21.201716 systemd-logind[1579]: Removed session 7. 
Jul 8 10:09:25.737936 systemd[1]: Created slice kubepods-besteffort-podb0b01f7e_7cce_4913_8c1a_e911ebd8282c.slice - libcontainer container kubepods-besteffort-podb0b01f7e_7cce_4913_8c1a_e911ebd8282c.slice. Jul 8 10:09:25.798093 kubelet[2732]: I0708 10:09:25.798003 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0b01f7e-7cce-4913-8c1a-e911ebd8282c-tigera-ca-bundle\") pod \"calico-typha-6bbb787b4f-k9fzd\" (UID: \"b0b01f7e-7cce-4913-8c1a-e911ebd8282c\") " pod="calico-system/calico-typha-6bbb787b4f-k9fzd" Jul 8 10:09:25.798093 kubelet[2732]: I0708 10:09:25.798062 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5dt\" (UniqueName: \"kubernetes.io/projected/b0b01f7e-7cce-4913-8c1a-e911ebd8282c-kube-api-access-sm5dt\") pod \"calico-typha-6bbb787b4f-k9fzd\" (UID: \"b0b01f7e-7cce-4913-8c1a-e911ebd8282c\") " pod="calico-system/calico-typha-6bbb787b4f-k9fzd" Jul 8 10:09:25.798093 kubelet[2732]: I0708 10:09:25.798101 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b0b01f7e-7cce-4913-8c1a-e911ebd8282c-typha-certs\") pod \"calico-typha-6bbb787b4f-k9fzd\" (UID: \"b0b01f7e-7cce-4913-8c1a-e911ebd8282c\") " pod="calico-system/calico-typha-6bbb787b4f-k9fzd" Jul 8 10:09:26.042112 containerd[1595]: time="2025-07-08T10:09:26.041947989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbb787b4f-k9fzd,Uid:b0b01f7e-7cce-4913-8c1a-e911ebd8282c,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:26.512521 containerd[1595]: time="2025-07-08T10:09:26.512458196Z" level=info msg="connecting to shim 7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f" address="unix:///run/containerd/s/a3d376cdeee58e1bbf0e21dde05ee7728a485c3fc813e4127f64aa8e49fe3988" namespace=k8s.io 
protocol=ttrpc version=3 Jul 8 10:09:26.546218 systemd[1]: Started cri-containerd-7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f.scope - libcontainer container 7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f. Jul 8 10:09:26.629842 containerd[1595]: time="2025-07-08T10:09:26.629793250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bbb787b4f-k9fzd,Uid:b0b01f7e-7cce-4913-8c1a-e911ebd8282c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f\"" Jul 8 10:09:26.631607 containerd[1595]: time="2025-07-08T10:09:26.631549246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 8 10:09:26.962020 systemd[1]: Created slice kubepods-besteffort-pod212bf004_5d5b_4047_99ad_a1436a30443e.slice - libcontainer container kubepods-besteffort-pod212bf004_5d5b_4047_99ad_a1436a30443e.slice. Jul 8 10:09:27.007204 kubelet[2732]: I0708 10:09:27.007154 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-lib-modules\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007204 kubelet[2732]: I0708 10:09:27.007188 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/212bf004-5d5b-4047-99ad-a1436a30443e-node-certs\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007204 kubelet[2732]: I0708 10:09:27.007203 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-var-lib-calico\") pod \"calico-node-v7ftq\" (UID: 
\"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007663 kubelet[2732]: I0708 10:09:27.007219 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-cni-log-dir\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007663 kubelet[2732]: I0708 10:09:27.007280 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-cni-net-dir\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007663 kubelet[2732]: I0708 10:09:27.007314 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-policysync\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007663 kubelet[2732]: I0708 10:09:27.007335 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/212bf004-5d5b-4047-99ad-a1436a30443e-tigera-ca-bundle\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007663 kubelet[2732]: I0708 10:09:27.007350 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-cni-bin-dir\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 
10:09:27.007778 kubelet[2732]: I0708 10:09:27.007368 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-var-run-calico\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007778 kubelet[2732]: I0708 10:09:27.007382 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-xtables-lock\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007778 kubelet[2732]: I0708 10:09:27.007418 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/212bf004-5d5b-4047-99ad-a1436a30443e-flexvol-driver-host\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.007778 kubelet[2732]: I0708 10:09:27.007442 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kk6p\" (UniqueName: \"kubernetes.io/projected/212bf004-5d5b-4047-99ad-a1436a30443e-kube-api-access-9kk6p\") pod \"calico-node-v7ftq\" (UID: \"212bf004-5d5b-4047-99ad-a1436a30443e\") " pod="calico-system/calico-node-v7ftq" Jul 8 10:09:27.110154 kubelet[2732]: E0708 10:09:27.110119 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:27.110154 kubelet[2732]: W0708 10:09:27.110142 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: 
"" Jul 8 10:09:27.110154 kubelet[2732]: E0708 10:09:27.110167 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:27.113103 kubelet[2732]: E0708 10:09:27.113049 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:27.113266 kubelet[2732]: W0708 10:09:27.113134 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:27.113266 kubelet[2732]: E0708 10:09:27.113157 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:27.170686 kubelet[2732]: E0708 10:09:27.170650 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:27.170686 kubelet[2732]: W0708 10:09:27.170674 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:27.170686 kubelet[2732]: E0708 10:09:27.170694 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:27.266719 containerd[1595]: time="2025-07-08T10:09:27.266666806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7ftq,Uid:212bf004-5d5b-4047-99ad-a1436a30443e,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:28.234739 kubelet[2732]: E0708 10:09:28.233934 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7" Jul 8 10:09:28.304814 kubelet[2732]: E0708 10:09:28.304731 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.304814 kubelet[2732]: W0708 10:09:28.304756 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.304814 kubelet[2732]: E0708 10:09:28.304780 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.305105 kubelet[2732]: E0708 10:09:28.305062 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.305105 kubelet[2732]: W0708 10:09:28.305101 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.305187 kubelet[2732]: E0708 10:09:28.305115 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.305348 kubelet[2732]: E0708 10:09:28.305318 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.305348 kubelet[2732]: W0708 10:09:28.305330 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.305348 kubelet[2732]: E0708 10:09:28.305341 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.305565 kubelet[2732]: E0708 10:09:28.305535 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.305565 kubelet[2732]: W0708 10:09:28.305547 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.305565 kubelet[2732]: E0708 10:09:28.305557 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.305761 kubelet[2732]: E0708 10:09:28.305732 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.305761 kubelet[2732]: W0708 10:09:28.305743 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.305761 kubelet[2732]: E0708 10:09:28.305753 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.305953 kubelet[2732]: E0708 10:09:28.305924 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.305953 kubelet[2732]: W0708 10:09:28.305936 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.305953 kubelet[2732]: E0708 10:09:28.305946 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.306160 kubelet[2732]: E0708 10:09:28.306136 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.306160 kubelet[2732]: W0708 10:09:28.306148 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.306160 kubelet[2732]: E0708 10:09:28.306157 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.306388 kubelet[2732]: E0708 10:09:28.306356 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.306388 kubelet[2732]: W0708 10:09:28.306368 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.306388 kubelet[2732]: E0708 10:09:28.306378 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.306586 kubelet[2732]: E0708 10:09:28.306566 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.306586 kubelet[2732]: W0708 10:09:28.306576 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.306586 kubelet[2732]: E0708 10:09:28.306586 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.306767 kubelet[2732]: E0708 10:09:28.306747 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.306767 kubelet[2732]: W0708 10:09:28.306757 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.306767 kubelet[2732]: E0708 10:09:28.306767 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.306951 kubelet[2732]: E0708 10:09:28.306930 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.306951 kubelet[2732]: W0708 10:09:28.306940 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.306951 kubelet[2732]: E0708 10:09:28.306950 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.307163 kubelet[2732]: E0708 10:09:28.307143 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.307163 kubelet[2732]: W0708 10:09:28.307153 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.307163 kubelet[2732]: E0708 10:09:28.307163 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.307348 kubelet[2732]: E0708 10:09:28.307329 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.307348 kubelet[2732]: W0708 10:09:28.307338 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.307348 kubelet[2732]: E0708 10:09:28.307348 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.307528 kubelet[2732]: E0708 10:09:28.307508 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.307528 kubelet[2732]: W0708 10:09:28.307519 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.307528 kubelet[2732]: E0708 10:09:28.307529 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.307707 kubelet[2732]: E0708 10:09:28.307687 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.307707 kubelet[2732]: W0708 10:09:28.307697 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.307707 kubelet[2732]: E0708 10:09:28.307707 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.307885 kubelet[2732]: E0708 10:09:28.307864 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.307885 kubelet[2732]: W0708 10:09:28.307875 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.307885 kubelet[2732]: E0708 10:09:28.307884 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.308090 kubelet[2732]: E0708 10:09:28.308061 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.308090 kubelet[2732]: W0708 10:09:28.308071 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.308169 kubelet[2732]: E0708 10:09:28.308095 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.308282 kubelet[2732]: E0708 10:09:28.308262 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.308282 kubelet[2732]: W0708 10:09:28.308272 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.308282 kubelet[2732]: E0708 10:09:28.308282 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.308471 kubelet[2732]: E0708 10:09:28.308451 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.308471 kubelet[2732]: W0708 10:09:28.308461 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.308543 kubelet[2732]: E0708 10:09:28.308470 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.308651 kubelet[2732]: E0708 10:09:28.308632 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.308651 kubelet[2732]: W0708 10:09:28.308641 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.308651 kubelet[2732]: E0708 10:09:28.308650 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.318655 kubelet[2732]: E0708 10:09:28.318628 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.318655 kubelet[2732]: W0708 10:09:28.318642 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.318655 kubelet[2732]: E0708 10:09:28.318653 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.318782 kubelet[2732]: I0708 10:09:28.318677 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7-registration-dir\") pod \"csi-node-driver-6fn56\" (UID: \"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7\") " pod="calico-system/csi-node-driver-6fn56" Jul 8 10:09:28.318868 kubelet[2732]: E0708 10:09:28.318844 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.318868 kubelet[2732]: W0708 10:09:28.318854 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.318937 kubelet[2732]: E0708 10:09:28.318877 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.318937 kubelet[2732]: I0708 10:09:28.318900 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7vf\" (UniqueName: \"kubernetes.io/projected/9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7-kube-api-access-4w7vf\") pod \"csi-node-driver-6fn56\" (UID: \"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7\") " pod="calico-system/csi-node-driver-6fn56" Jul 8 10:09:28.319143 kubelet[2732]: E0708 10:09:28.319118 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.319143 kubelet[2732]: W0708 10:09:28.319130 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.319143 kubelet[2732]: E0708 10:09:28.319143 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.319240 kubelet[2732]: I0708 10:09:28.319156 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7-socket-dir\") pod \"csi-node-driver-6fn56\" (UID: \"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7\") " pod="calico-system/csi-node-driver-6fn56" Jul 8 10:09:28.319353 kubelet[2732]: E0708 10:09:28.319330 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.319353 kubelet[2732]: W0708 10:09:28.319340 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.319353 kubelet[2732]: E0708 10:09:28.319352 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.319437 kubelet[2732]: I0708 10:09:28.319364 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7-varrun\") pod \"csi-node-driver-6fn56\" (UID: \"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7\") " pod="calico-system/csi-node-driver-6fn56" Jul 8 10:09:28.319553 kubelet[2732]: E0708 10:09:28.319539 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.319553 kubelet[2732]: W0708 10:09:28.319550 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.319614 kubelet[2732]: E0708 10:09:28.319561 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.319614 kubelet[2732]: I0708 10:09:28.319574 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7-kubelet-dir\") pod \"csi-node-driver-6fn56\" (UID: \"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7\") " pod="calico-system/csi-node-driver-6fn56" Jul 8 10:09:28.319785 kubelet[2732]: E0708 10:09:28.319759 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.319785 kubelet[2732]: W0708 10:09:28.319771 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.319785 kubelet[2732]: E0708 10:09:28.319784 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.319950 kubelet[2732]: E0708 10:09:28.319938 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.319950 kubelet[2732]: W0708 10:09:28.319946 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320037 kubelet[2732]: E0708 10:09:28.320005 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.320178 kubelet[2732]: E0708 10:09:28.320165 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.320178 kubelet[2732]: W0708 10:09:28.320175 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320243 kubelet[2732]: E0708 10:09:28.320204 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.320351 kubelet[2732]: E0708 10:09:28.320339 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.320351 kubelet[2732]: W0708 10:09:28.320347 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320410 kubelet[2732]: E0708 10:09:28.320381 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.320516 kubelet[2732]: E0708 10:09:28.320503 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.320516 kubelet[2732]: W0708 10:09:28.320511 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320572 kubelet[2732]: E0708 10:09:28.320537 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.320680 kubelet[2732]: E0708 10:09:28.320667 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.320680 kubelet[2732]: W0708 10:09:28.320675 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320734 kubelet[2732]: E0708 10:09:28.320686 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.320842 kubelet[2732]: E0708 10:09:28.320830 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.320842 kubelet[2732]: W0708 10:09:28.320838 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.320896 kubelet[2732]: E0708 10:09:28.320845 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.321027 kubelet[2732]: E0708 10:09:28.321014 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.321027 kubelet[2732]: W0708 10:09:28.321025 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.321101 kubelet[2732]: E0708 10:09:28.321034 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.321219 kubelet[2732]: E0708 10:09:28.321206 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.321219 kubelet[2732]: W0708 10:09:28.321214 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.321274 kubelet[2732]: E0708 10:09:28.321221 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.321386 kubelet[2732]: E0708 10:09:28.321374 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.321386 kubelet[2732]: W0708 10:09:28.321382 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.321447 kubelet[2732]: E0708 10:09:28.321389 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.420927 kubelet[2732]: E0708 10:09:28.420886 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.420927 kubelet[2732]: W0708 10:09:28.420909 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.420927 kubelet[2732]: E0708 10:09:28.420931 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.421197 kubelet[2732]: E0708 10:09:28.421166 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.421197 kubelet[2732]: W0708 10:09:28.421179 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.421197 kubelet[2732]: E0708 10:09:28.421194 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.421419 kubelet[2732]: E0708 10:09:28.421399 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.421419 kubelet[2732]: W0708 10:09:28.421414 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.421482 kubelet[2732]: E0708 10:09:28.421434 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.421616 kubelet[2732]: E0708 10:09:28.421600 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.421616 kubelet[2732]: W0708 10:09:28.421609 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.421680 kubelet[2732]: E0708 10:09:28.421622 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.421788 kubelet[2732]: E0708 10:09:28.421771 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.421788 kubelet[2732]: W0708 10:09:28.421779 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.421848 kubelet[2732]: E0708 10:09:28.421791 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.422008 kubelet[2732]: E0708 10:09:28.421969 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.422008 kubelet[2732]: W0708 10:09:28.421981 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.422008 kubelet[2732]: E0708 10:09:28.422001 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.422302 kubelet[2732]: E0708 10:09:28.422289 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.422302 kubelet[2732]: W0708 10:09:28.422297 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.422476 kubelet[2732]: E0708 10:09:28.422310 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.422553 kubelet[2732]: E0708 10:09:28.422527 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.422553 kubelet[2732]: W0708 10:09:28.422541 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.422743 kubelet[2732]: E0708 10:09:28.422614 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.422780 kubelet[2732]: E0708 10:09:28.422774 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.422817 kubelet[2732]: W0708 10:09:28.422783 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.422845 kubelet[2732]: E0708 10:09:28.422833 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.423029 kubelet[2732]: E0708 10:09:28.422965 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.423029 kubelet[2732]: W0708 10:09:28.422976 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.423029 kubelet[2732]: E0708 10:09:28.423003 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.423263 kubelet[2732]: E0708 10:09:28.423202 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.423263 kubelet[2732]: W0708 10:09:28.423209 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.423263 kubelet[2732]: E0708 10:09:28.423223 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.423466 kubelet[2732]: E0708 10:09:28.423444 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.423466 kubelet[2732]: W0708 10:09:28.423455 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.423466 kubelet[2732]: E0708 10:09:28.423469 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.423693 kubelet[2732]: E0708 10:09:28.423672 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.423693 kubelet[2732]: W0708 10:09:28.423682 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.423693 kubelet[2732]: E0708 10:09:28.423696 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.423963 kubelet[2732]: E0708 10:09:28.423938 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.423963 kubelet[2732]: W0708 10:09:28.423954 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.424052 kubelet[2732]: E0708 10:09:28.423973 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.424257 kubelet[2732]: E0708 10:09:28.424236 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.424257 kubelet[2732]: W0708 10:09:28.424248 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.424339 kubelet[2732]: E0708 10:09:28.424278 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.424443 kubelet[2732]: E0708 10:09:28.424422 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.424443 kubelet[2732]: W0708 10:09:28.424434 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.424519 kubelet[2732]: E0708 10:09:28.424462 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.424618 kubelet[2732]: E0708 10:09:28.424598 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.424618 kubelet[2732]: W0708 10:09:28.424610 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.424686 kubelet[2732]: E0708 10:09:28.424637 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:28.424785 kubelet[2732]: E0708 10:09:28.424767 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:28.424785 kubelet[2732]: W0708 10:09:28.424777 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:28.424868 kubelet[2732]: E0708 10:09:28.424793 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:28.425138 kubelet[2732]: E0708 10:09:28.425118 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 8 10:09:28.425138 kubelet[2732]: W0708 10:09:28.425130 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 8 10:09:28.425227 kubelet[2732]: E0708 10:09:28.425146 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 8 10:09:28.806228 containerd[1595]: time="2025-07-08T10:09:28.806104978Z" level=info msg="connecting to shim 16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b" address="unix:///run/containerd/s/d43a77411cb518f7a40047529fe566ebb980196ca9c6944e29cb846cdedc7d29" namespace=k8s.io protocol=ttrpc version=3
Jul 8 10:09:28.838221 systemd[1]: Started cri-containerd-16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b.scope - libcontainer container 16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b.
Jul 8 10:09:28.869060 containerd[1595]: time="2025-07-08T10:09:28.869002654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7ftq,Uid:212bf004-5d5b-4047-99ad-a1436a30443e,Namespace:calico-system,Attempt:0,} returns sandbox id \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\""
Jul 8 10:09:30.216249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625778109.mount: Deactivated successfully.
Jul 8 10:09:30.280856 kubelet[2732]: E0708 10:09:30.280792 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7"
Jul 8 10:09:31.163891 containerd[1595]: time="2025-07-08T10:09:31.163826594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:31.164761 containerd[1595]: time="2025-07-08T10:09:31.164728614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 8 10:09:31.165926 containerd[1595]: time="2025-07-08T10:09:31.165860689Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:31.168190 containerd[1595]: time="2025-07-08T10:09:31.168131278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:31.168652 containerd[1595]: time="2025-07-08T10:09:31.168607065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.537017524s"
Jul 8 10:09:31.168652 containerd[1595]: time="2025-07-08T10:09:31.168646881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 8 10:09:31.169624 containerd[1595]: time="2025-07-08T10:09:31.169588205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 8 10:09:31.176697 containerd[1595]: time="2025-07-08T10:09:31.176646981Z" level=info msg="CreateContainer within sandbox \"7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 8 10:09:31.188820 containerd[1595]: time="2025-07-08T10:09:31.188176554Z" level=info msg="Container 75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a: CDI devices from CRI Config.CDIDevices: []"
Jul 8 10:09:31.196411 containerd[1595]: time="2025-07-08T10:09:31.196365992Z" level=info msg="CreateContainer within sandbox \"7aa39094d7207ee52413d0bcbf7ff0f1b39ade1602fa536b17e5f1324f57697f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a\""
Jul 8 10:09:31.196940 containerd[1595]: time="2025-07-08T10:09:31.196912051Z" level=info msg="StartContainer for \"75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a\""
Jul 8 10:09:31.197908 containerd[1595]: time="2025-07-08T10:09:31.197868704Z" level=info msg="connecting to shim 75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a" address="unix:///run/containerd/s/a3d376cdeee58e1bbf0e21dde05ee7728a485c3fc813e4127f64aa8e49fe3988" protocol=ttrpc version=3
Jul 8 10:09:31.226298 systemd[1]: Started cri-containerd-75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a.scope - libcontainer container 75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a.
Jul 8 10:09:31.272512 containerd[1595]: time="2025-07-08T10:09:31.272455420Z" level=info msg="StartContainer for \"75d8afa1ff5b0c82d2495ee1ab8fa55a7f81a519a1d396de799ddc91cbf62d1a\" returns successfully"
Jul 8 10:09:31.731808 kubelet[2732]: E0708 10:09:31.731757 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 8 10:09:31.731808 kubelet[2732]: W0708 10:09:31.731786 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 8 10:09:31.731808 kubelet[2732]: E0708 10:09:31.731811 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 8 10:09:31.732386 kubelet[2732]: E0708 10:09:31.732105 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 8 10:09:31.732386 kubelet[2732]: W0708 10:09:31.732113 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 8 10:09:31.732386 kubelet[2732]: E0708 10:09:31.732122 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 8 10:09:32.281534 kubelet[2732]: E0708 10:09:32.281490 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7"
Jul 8 10:09:32.678069 kubelet[2732]: I0708 10:09:32.677942 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 8 10:09:32.741651 kubelet[2732]: E0708 10:09:32.741615 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 8 10:09:32.741651 kubelet[2732]: W0708 10:09:32.741639 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 8 10:09:32.742096 kubelet[2732]: E0708 10:09:32.741664 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.742096 kubelet[2732]: E0708 10:09:32.741861 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742096 kubelet[2732]: W0708 10:09:32.741878 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742096 kubelet[2732]: E0708 10:09:32.741886 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.742096 kubelet[2732]: E0708 10:09:32.742072 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742096 kubelet[2732]: W0708 10:09:32.742096 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742243 kubelet[2732]: E0708 10:09:32.742105 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.742265 kubelet[2732]: E0708 10:09:32.742246 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742265 kubelet[2732]: W0708 10:09:32.742253 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742265 kubelet[2732]: E0708 10:09:32.742260 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.742497 kubelet[2732]: E0708 10:09:32.742463 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742497 kubelet[2732]: W0708 10:09:32.742485 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742547 kubelet[2732]: E0708 10:09:32.742509 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.742721 kubelet[2732]: E0708 10:09:32.742706 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742721 kubelet[2732]: W0708 10:09:32.742716 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742787 kubelet[2732]: E0708 10:09:32.742725 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.742908 kubelet[2732]: E0708 10:09:32.742893 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.742908 kubelet[2732]: W0708 10:09:32.742904 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.742957 kubelet[2732]: E0708 10:09:32.742914 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.743106 kubelet[2732]: E0708 10:09:32.743094 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743106 kubelet[2732]: W0708 10:09:32.743104 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743148 kubelet[2732]: E0708 10:09:32.743113 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.743280 kubelet[2732]: E0708 10:09:32.743269 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743280 kubelet[2732]: W0708 10:09:32.743277 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743331 kubelet[2732]: E0708 10:09:32.743284 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.743448 kubelet[2732]: E0708 10:09:32.743436 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743470 kubelet[2732]: W0708 10:09:32.743446 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743470 kubelet[2732]: E0708 10:09:32.743458 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.743629 kubelet[2732]: E0708 10:09:32.743618 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743629 kubelet[2732]: W0708 10:09:32.743626 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743670 kubelet[2732]: E0708 10:09:32.743634 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.743782 kubelet[2732]: E0708 10:09:32.743771 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743782 kubelet[2732]: W0708 10:09:32.743779 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743827 kubelet[2732]: E0708 10:09:32.743786 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.743934 kubelet[2732]: E0708 10:09:32.743909 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.743934 kubelet[2732]: W0708 10:09:32.743915 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.743974 kubelet[2732]: E0708 10:09:32.743933 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.744163 kubelet[2732]: E0708 10:09:32.744150 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.744163 kubelet[2732]: W0708 10:09:32.744159 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.744204 kubelet[2732]: E0708 10:09:32.744167 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.744305 kubelet[2732]: E0708 10:09:32.744294 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.744305 kubelet[2732]: W0708 10:09:32.744302 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.744348 kubelet[2732]: E0708 10:09:32.744308 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.755768 kubelet[2732]: E0708 10:09:32.755750 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.755768 kubelet[2732]: W0708 10:09:32.755762 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.755835 kubelet[2732]: E0708 10:09:32.755772 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.755971 kubelet[2732]: E0708 10:09:32.755955 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.755971 kubelet[2732]: W0708 10:09:32.755964 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.756034 kubelet[2732]: E0708 10:09:32.755978 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.756175 kubelet[2732]: E0708 10:09:32.756153 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.756175 kubelet[2732]: W0708 10:09:32.756162 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.756217 kubelet[2732]: E0708 10:09:32.756176 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.756400 kubelet[2732]: E0708 10:09:32.756374 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.756400 kubelet[2732]: W0708 10:09:32.756388 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.756466 kubelet[2732]: E0708 10:09:32.756404 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.756621 kubelet[2732]: E0708 10:09:32.756602 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.756621 kubelet[2732]: W0708 10:09:32.756618 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.756692 kubelet[2732]: E0708 10:09:32.756635 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.756835 kubelet[2732]: E0708 10:09:32.756819 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.756835 kubelet[2732]: W0708 10:09:32.756830 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.756885 kubelet[2732]: E0708 10:09:32.756843 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.757055 kubelet[2732]: E0708 10:09:32.757039 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.757055 kubelet[2732]: W0708 10:09:32.757050 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.757135 kubelet[2732]: E0708 10:09:32.757063 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.757240 kubelet[2732]: E0708 10:09:32.757227 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.757240 kubelet[2732]: W0708 10:09:32.757238 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.757288 kubelet[2732]: E0708 10:09:32.757250 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.757421 kubelet[2732]: E0708 10:09:32.757407 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.757421 kubelet[2732]: W0708 10:09:32.757418 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.757477 kubelet[2732]: E0708 10:09:32.757433 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.757618 kubelet[2732]: E0708 10:09:32.757603 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.757618 kubelet[2732]: W0708 10:09:32.757614 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.757679 kubelet[2732]: E0708 10:09:32.757632 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.757812 kubelet[2732]: E0708 10:09:32.757798 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.757812 kubelet[2732]: W0708 10:09:32.757807 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.757859 kubelet[2732]: E0708 10:09:32.757819 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.758005 kubelet[2732]: E0708 10:09:32.757990 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.758005 kubelet[2732]: W0708 10:09:32.758001 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.758060 kubelet[2732]: E0708 10:09:32.758014 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.758261 kubelet[2732]: E0708 10:09:32.758246 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.758261 kubelet[2732]: W0708 10:09:32.758258 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.758318 kubelet[2732]: E0708 10:09:32.758272 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.758444 kubelet[2732]: E0708 10:09:32.758432 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.758471 kubelet[2732]: W0708 10:09:32.758442 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.758471 kubelet[2732]: E0708 10:09:32.758458 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.758629 kubelet[2732]: E0708 10:09:32.758617 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.758629 kubelet[2732]: W0708 10:09:32.758626 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.758677 kubelet[2732]: E0708 10:09:32.758638 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.758810 kubelet[2732]: E0708 10:09:32.758798 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.758810 kubelet[2732]: W0708 10:09:32.758808 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.758853 kubelet[2732]: E0708 10:09:32.758820 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:32.759007 kubelet[2732]: E0708 10:09:32.758996 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.759007 kubelet[2732]: W0708 10:09:32.759004 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.759051 kubelet[2732]: E0708 10:09:32.759012 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 8 10:09:32.759378 kubelet[2732]: E0708 10:09:32.759358 2732 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 8 10:09:32.759378 kubelet[2732]: W0708 10:09:32.759368 2732 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 8 10:09:32.759378 kubelet[2732]: E0708 10:09:32.759376 2732 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 8 10:09:34.085840 containerd[1595]: time="2025-07-08T10:09:34.085774935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:34.086507 containerd[1595]: time="2025-07-08T10:09:34.086476346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 8 10:09:34.087580 containerd[1595]: time="2025-07-08T10:09:34.087550069Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:34.089299 containerd[1595]: time="2025-07-08T10:09:34.089263866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:34.089757 containerd[1595]: time="2025-07-08T10:09:34.089724163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.920107695s" Jul 8 10:09:34.089785 containerd[1595]: time="2025-07-08T10:09:34.089754620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 8 10:09:34.091560 containerd[1595]: time="2025-07-08T10:09:34.091535654Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 8 10:09:34.097957 containerd[1595]: time="2025-07-08T10:09:34.097914548Z" level=info msg="Container dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:34.107333 containerd[1595]: time="2025-07-08T10:09:34.107297528Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\"" Jul 8 10:09:34.107791 containerd[1595]: time="2025-07-08T10:09:34.107746183Z" level=info msg="StartContainer for \"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\"" Jul 8 10:09:34.109191 containerd[1595]: time="2025-07-08T10:09:34.109164264Z" level=info msg="connecting to shim dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4" address="unix:///run/containerd/s/d43a77411cb518f7a40047529fe566ebb980196ca9c6944e29cb846cdedc7d29" protocol=ttrpc version=3 Jul 8 10:09:34.138268 systemd[1]: Started cri-containerd-dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4.scope - libcontainer container dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4. Jul 8 10:09:34.181292 containerd[1595]: time="2025-07-08T10:09:34.181244999Z" level=info msg="StartContainer for \"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\" returns successfully" Jul 8 10:09:34.189912 systemd[1]: cri-containerd-dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4.scope: Deactivated successfully. 
Jul 8 10:09:34.191575 containerd[1595]: time="2025-07-08T10:09:34.191528716Z" level=info msg="received exit event container_id:\"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\" id:\"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\" pid:3523 exited_at:{seconds:1751969374 nanos:191244330}" Jul 8 10:09:34.191710 containerd[1595]: time="2025-07-08T10:09:34.191681904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\" id:\"dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4\" pid:3523 exited_at:{seconds:1751969374 nanos:191244330}" Jul 8 10:09:34.218642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc8f2c4fb2d5188d70024bafb196fce00b4ec643c909f6509fbb804c75df45e4-rootfs.mount: Deactivated successfully. Jul 8 10:09:34.281888 kubelet[2732]: E0708 10:09:34.281796 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7" Jul 8 10:09:34.687130 containerd[1595]: time="2025-07-08T10:09:34.686994068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 8 10:09:34.702331 kubelet[2732]: I0708 10:09:34.702228 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bbb787b4f-k9fzd" podStartSLOduration=5.164090091 podStartE2EDuration="9.702202118s" podCreationTimestamp="2025-07-08 10:09:25 +0000 UTC" firstStartedPulling="2025-07-08 10:09:26.631283254 +0000 UTC m=+23.443106367" lastFinishedPulling="2025-07-08 10:09:31.169395281 +0000 UTC m=+27.981218394" observedRunningTime="2025-07-08 10:09:31.691270979 +0000 UTC m=+28.503094092" watchObservedRunningTime="2025-07-08 10:09:34.702202118 +0000 UTC 
m=+31.514025231" Jul 8 10:09:36.281449 kubelet[2732]: E0708 10:09:36.281386 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7" Jul 8 10:09:38.281584 kubelet[2732]: E0708 10:09:38.281509 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7" Jul 8 10:09:39.403909 containerd[1595]: time="2025-07-08T10:09:39.403826726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:39.404974 containerd[1595]: time="2025-07-08T10:09:39.404891429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 8 10:09:39.406221 containerd[1595]: time="2025-07-08T10:09:39.406180363Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:39.408631 containerd[1595]: time="2025-07-08T10:09:39.408601888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:39.409278 containerd[1595]: time="2025-07-08T10:09:39.409235920Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.722194383s" Jul 8 10:09:39.409278 containerd[1595]: time="2025-07-08T10:09:39.409264714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 8 10:09:39.411517 containerd[1595]: time="2025-07-08T10:09:39.411462810Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 8 10:09:39.419068 containerd[1595]: time="2025-07-08T10:09:39.419011328Z" level=info msg="Container 26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:39.428881 containerd[1595]: time="2025-07-08T10:09:39.428834324Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\"" Jul 8 10:09:39.429492 containerd[1595]: time="2025-07-08T10:09:39.429450033Z" level=info msg="StartContainer for \"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\"" Jul 8 10:09:39.430823 containerd[1595]: time="2025-07-08T10:09:39.430772851Z" level=info msg="connecting to shim 26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878" address="unix:///run/containerd/s/d43a77411cb518f7a40047529fe566ebb980196ca9c6944e29cb846cdedc7d29" protocol=ttrpc version=3 Jul 8 10:09:39.455253 systemd[1]: Started cri-containerd-26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878.scope - libcontainer container 26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878. 
Jul 8 10:09:40.193346 containerd[1595]: time="2025-07-08T10:09:40.193284562Z" level=info msg="StartContainer for \"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\" returns successfully" Jul 8 10:09:40.281196 kubelet[2732]: E0708 10:09:40.281059 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7" Jul 8 10:09:40.859202 containerd[1595]: time="2025-07-08T10:09:40.859125219Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 8 10:09:40.862235 systemd[1]: cri-containerd-26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878.scope: Deactivated successfully. Jul 8 10:09:40.863101 systemd[1]: cri-containerd-26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878.scope: Consumed 580ms CPU time, 179M memory peak, 3.3M read from disk, 171.2M written to disk. 
Jul 8 10:09:40.864783 containerd[1595]: time="2025-07-08T10:09:40.864722144Z" level=info msg="received exit event container_id:\"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\" id:\"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\" pid:3585 exited_at:{seconds:1751969380 nanos:864454331}" Jul 8 10:09:40.864849 containerd[1595]: time="2025-07-08T10:09:40.864803407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\" id:\"26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878\" pid:3585 exited_at:{seconds:1751969380 nanos:864454331}" Jul 8 10:09:40.887033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26fd8b16aef02b463ca8e37931425f17252d2b90eb14cbd6427b6d418b21d878-rootfs.mount: Deactivated successfully. Jul 8 10:09:40.911821 kubelet[2732]: I0708 10:09:40.911774 2732 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 8 10:09:40.955819 systemd[1]: Created slice kubepods-burstable-pod061d687d_cf7c_442d_bfd4_1574d84082c3.slice - libcontainer container kubepods-burstable-pod061d687d_cf7c_442d_bfd4_1574d84082c3.slice. Jul 8 10:09:40.966771 systemd[1]: Created slice kubepods-besteffort-pod2cff0e85_674f_4a7c_bb93_e9dd47a60308.slice - libcontainer container kubepods-besteffort-pod2cff0e85_674f_4a7c_bb93_e9dd47a60308.slice. Jul 8 10:09:40.973267 systemd[1]: Created slice kubepods-besteffort-pod17edc73a_6465_4ae3_a2c7_29809051b166.slice - libcontainer container kubepods-besteffort-pod17edc73a_6465_4ae3_a2c7_29809051b166.slice. Jul 8 10:09:40.977581 systemd[1]: Created slice kubepods-besteffort-podebd03ea5_e1d3_4fb4_9b4b_f073c91f8b8a.slice - libcontainer container kubepods-besteffort-podebd03ea5_e1d3_4fb4_9b4b_f073c91f8b8a.slice. 
Jul 8 10:09:40.982600 systemd[1]: Created slice kubepods-besteffort-podd86093fb_3f40_4474_985f_2d29a5aa42f9.slice - libcontainer container kubepods-besteffort-podd86093fb_3f40_4474_985f_2d29a5aa42f9.slice. Jul 8 10:09:40.988729 systemd[1]: Created slice kubepods-burstable-pod8abd2c33_f81d_4365_ad56_4ad33e89415f.slice - libcontainer container kubepods-burstable-pod8abd2c33_f81d_4365_ad56_4ad33e89415f.slice. Jul 8 10:09:40.992206 systemd[1]: Created slice kubepods-besteffort-pod2f37cc6d_720d_4b49_b845_4d7d2188107a.slice - libcontainer container kubepods-besteffort-pod2f37cc6d_720d_4b49_b845_4d7d2188107a.slice. Jul 8 10:09:41.010582 kubelet[2732]: I0708 10:09:41.010530 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pm6x\" (UniqueName: \"kubernetes.io/projected/2cff0e85-674f-4a7c-bb93-e9dd47a60308-kube-api-access-8pm6x\") pod \"calico-apiserver-d589945c5-wwn7p\" (UID: \"2cff0e85-674f-4a7c-bb93-e9dd47a60308\") " pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p" Jul 8 10:09:41.010582 kubelet[2732]: I0708 10:09:41.010580 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/17edc73a-6465-4ae3-a2c7-29809051b166-goldmane-key-pair\") pod \"goldmane-58fd7646b9-dz6z6\" (UID: \"17edc73a-6465-4ae3-a2c7-29809051b166\") " pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.010726 kubelet[2732]: I0708 10:09:41.010597 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/061d687d-cf7c-442d-bfd4-1574d84082c3-config-volume\") pod \"coredns-7c65d6cfc9-zr8q6\" (UID: \"061d687d-cf7c-442d-bfd4-1574d84082c3\") " pod="kube-system/coredns-7c65d6cfc9-zr8q6" Jul 8 10:09:41.010726 kubelet[2732]: I0708 10:09:41.010621 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-nr4fw\" (UniqueName: \"kubernetes.io/projected/2f37cc6d-720d-4b49-b845-4d7d2188107a-kube-api-access-nr4fw\") pod \"calico-kube-controllers-6b995ff85b-5jvv6\" (UID: \"2f37cc6d-720d-4b49-b845-4d7d2188107a\") " pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" Jul 8 10:09:41.010726 kubelet[2732]: I0708 10:09:41.010640 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-backend-key-pair\") pod \"whisker-6cf8c96d4-m5nnt\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") " pod="calico-system/whisker-6cf8c96d4-m5nnt" Jul 8 10:09:41.010726 kubelet[2732]: I0708 10:09:41.010657 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17edc73a-6465-4ae3-a2c7-29809051b166-config\") pod \"goldmane-58fd7646b9-dz6z6\" (UID: \"17edc73a-6465-4ae3-a2c7-29809051b166\") " pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.010726 kubelet[2732]: I0708 10:09:41.010673 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4m87\" (UniqueName: \"kubernetes.io/projected/ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a-kube-api-access-h4m87\") pod \"calico-apiserver-d589945c5-lcwrp\" (UID: \"ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a\") " pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp" Jul 8 10:09:41.010914 kubelet[2732]: I0708 10:09:41.010688 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-ca-bundle\") pod \"whisker-6cf8c96d4-m5nnt\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") " pod="calico-system/whisker-6cf8c96d4-m5nnt" Jul 8 10:09:41.010914 kubelet[2732]: I0708 10:09:41.010707 2732 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17edc73a-6465-4ae3-a2c7-29809051b166-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-dz6z6\" (UID: \"17edc73a-6465-4ae3-a2c7-29809051b166\") " pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.010914 kubelet[2732]: I0708 10:09:41.010720 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg8dm\" (UniqueName: \"kubernetes.io/projected/17edc73a-6465-4ae3-a2c7-29809051b166-kube-api-access-zg8dm\") pod \"goldmane-58fd7646b9-dz6z6\" (UID: \"17edc73a-6465-4ae3-a2c7-29809051b166\") " pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.010914 kubelet[2732]: I0708 10:09:41.010736 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f37cc6d-720d-4b49-b845-4d7d2188107a-tigera-ca-bundle\") pod \"calico-kube-controllers-6b995ff85b-5jvv6\" (UID: \"2f37cc6d-720d-4b49-b845-4d7d2188107a\") " pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" Jul 8 10:09:41.010914 kubelet[2732]: I0708 10:09:41.010753 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2cff0e85-674f-4a7c-bb93-e9dd47a60308-calico-apiserver-certs\") pod \"calico-apiserver-d589945c5-wwn7p\" (UID: \"2cff0e85-674f-4a7c-bb93-e9dd47a60308\") " pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p" Jul 8 10:09:41.011115 kubelet[2732]: I0708 10:09:41.010778 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9s9\" (UniqueName: \"kubernetes.io/projected/061d687d-cf7c-442d-bfd4-1574d84082c3-kube-api-access-9q9s9\") pod \"coredns-7c65d6cfc9-zr8q6\" (UID: \"061d687d-cf7c-442d-bfd4-1574d84082c3\") " 
pod="kube-system/coredns-7c65d6cfc9-zr8q6" Jul 8 10:09:41.011115 kubelet[2732]: I0708 10:09:41.010863 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlxc\" (UniqueName: \"kubernetes.io/projected/8abd2c33-f81d-4365-ad56-4ad33e89415f-kube-api-access-7zlxc\") pod \"coredns-7c65d6cfc9-t9sld\" (UID: \"8abd2c33-f81d-4365-ad56-4ad33e89415f\") " pod="kube-system/coredns-7c65d6cfc9-t9sld" Jul 8 10:09:41.011115 kubelet[2732]: I0708 10:09:41.010907 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2d8\" (UniqueName: \"kubernetes.io/projected/d86093fb-3f40-4474-985f-2d29a5aa42f9-kube-api-access-vz2d8\") pod \"whisker-6cf8c96d4-m5nnt\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") " pod="calico-system/whisker-6cf8c96d4-m5nnt" Jul 8 10:09:41.011115 kubelet[2732]: I0708 10:09:41.010945 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8abd2c33-f81d-4365-ad56-4ad33e89415f-config-volume\") pod \"coredns-7c65d6cfc9-t9sld\" (UID: \"8abd2c33-f81d-4365-ad56-4ad33e89415f\") " pod="kube-system/coredns-7c65d6cfc9-t9sld" Jul 8 10:09:41.011115 kubelet[2732]: I0708 10:09:41.010964 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a-calico-apiserver-certs\") pod \"calico-apiserver-d589945c5-lcwrp\" (UID: \"ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a\") " pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp" Jul 8 10:09:41.378808 containerd[1595]: time="2025-07-08T10:09:41.377045407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,}" Jul 8 10:09:41.379832 containerd[1595]: 
time="2025-07-08T10:09:41.378937194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf8c96d4-m5nnt,Uid:d86093fb-3f40-4474-985f-2d29a5aa42f9,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:41.379832 containerd[1595]: time="2025-07-08T10:09:41.379197693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-lcwrp,Uid:ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:09:41.379832 containerd[1595]: time="2025-07-08T10:09:41.377572046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-wwn7p,Uid:2cff0e85-674f-4a7c-bb93-e9dd47a60308,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:09:41.379832 containerd[1595]: time="2025-07-08T10:09:41.377669340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dz6z6,Uid:17edc73a-6465-4ae3-a2c7-29809051b166,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:41.379832 containerd[1595]: time="2025-07-08T10:09:41.379826796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:41.379991 containerd[1595]: time="2025-07-08T10:09:41.379841393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t9sld,Uid:8abd2c33-f81d-4365-ad56-4ad33e89415f,Namespace:kube-system,Attempt:0,}" Jul 8 10:09:41.521108 containerd[1595]: time="2025-07-08T10:09:41.520469299Z" level=error msg="Failed to destroy network for sandbox \"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.527521 containerd[1595]: time="2025-07-08T10:09:41.527463329Z" level=error msg="Failed to destroy network for sandbox 
\"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.528770 containerd[1595]: time="2025-07-08T10:09:41.528736573Z" level=error msg="Failed to destroy network for sandbox \"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.529842 containerd[1595]: time="2025-07-08T10:09:41.529789844Z" level=error msg="Failed to destroy network for sandbox \"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.530457 containerd[1595]: time="2025-07-08T10:09:41.530405692Z" level=error msg="Failed to destroy network for sandbox \"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.536013 containerd[1595]: time="2025-07-08T10:09:41.535947752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 8 10:09:41.536326 containerd[1595]: time="2025-07-08T10:09:41.535959945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-lcwrp,Uid:ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.536444 containerd[1595]: time="2025-07-08T10:09:41.535968150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dz6z6,Uid:17edc73a-6465-4ae3-a2c7-29809051b166,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.537052 containerd[1595]: time="2025-07-08T10:09:41.537011483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cf8c96d4-m5nnt,Uid:d86093fb-3f40-4474-985f-2d29a5aa42f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.538023 containerd[1595]: time="2025-07-08T10:09:41.537569011Z" level=error msg="Failed to destroy network for sandbox \"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.538453 containerd[1595]: time="2025-07-08T10:09:41.538426403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-wwn7p,Uid:2cff0e85-674f-4a7c-bb93-e9dd47a60308,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.539326 containerd[1595]: time="2025-07-08T10:09:41.539298583Z" level=error msg="Failed to destroy network for sandbox \"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.539392 containerd[1595]: time="2025-07-08T10:09:41.539351242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t9sld,Uid:8abd2c33-f81d-4365-ad56-4ad33e89415f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.540597 containerd[1595]: time="2025-07-08T10:09:41.540570133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544107 kubelet[2732]: E0708 10:09:41.544014 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544107 kubelet[2732]: E0708 10:09:41.544053 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544461 kubelet[2732]: E0708 10:09:41.544028 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544461 kubelet[2732]: E0708 10:09:41.544121 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cf8c96d4-m5nnt" Jul 8 10:09:41.544461 kubelet[2732]: E0708 10:09:41.544137 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544461 kubelet[2732]: E0708 10:09:41.544143 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cf8c96d4-m5nnt" Jul 8 10:09:41.544570 kubelet[2732]: E0708 10:09:41.544153 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t9sld" Jul 8 10:09:41.544570 kubelet[2732]: E0708 10:09:41.544169 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-t9sld" Jul 8 10:09:41.544570 kubelet[2732]: E0708 10:09:41.544195 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" Jul 8 10:09:41.544570 kubelet[2732]: E0708 10:09:41.544223 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" Jul 8 10:09:41.544664 kubelet[2732]: E0708 10:09:41.544118 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zr8q6" Jul 8 10:09:41.544664 kubelet[2732]: E0708 10:09:41.544240 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-zr8q6" Jul 8 10:09:41.544664 kubelet[2732]: E0708 10:09:41.544188 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cf8c96d4-m5nnt_calico-system(d86093fb-3f40-4474-985f-2d29a5aa42f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cf8c96d4-m5nnt_calico-system(d86093fb-3f40-4474-985f-2d29a5aa42f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732e4553ac0d4a837a625704c510bffdf1685e31849be06467b49065b8594a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cf8c96d4-m5nnt" podUID="d86093fb-3f40-4474-985f-2d29a5aa42f9" Jul 8 10:09:41.544761 kubelet[2732]: E0708 10:09:41.544010 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544761 kubelet[2732]: E0708 10:09:41.544279 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p" Jul 8 10:09:41.544761 kubelet[2732]: E0708 10:09:41.544288 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-6b995ff85b-5jvv6_calico-system(2f37cc6d-720d-4b49-b845-4d7d2188107a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b995ff85b-5jvv6_calico-system(2f37cc6d-720d-4b49-b845-4d7d2188107a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8793306ee5f0532a4f60b10eeba796a2d8b6851fc2b6b384573c711ca4ecc8ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" podUID="2f37cc6d-720d-4b49-b845-4d7d2188107a" Jul 8 10:09:41.544854 kubelet[2732]: E0708 10:09:41.544195 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t9sld_kube-system(8abd2c33-f81d-4365-ad56-4ad33e89415f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t9sld_kube-system(8abd2c33-f81d-4365-ad56-4ad33e89415f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eae671dba703dba96b4f6ac4e22a46354eda3ea8ff87d8c39126c2e516460b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t9sld" podUID="8abd2c33-f81d-4365-ad56-4ad33e89415f" Jul 8 10:09:41.544854 kubelet[2732]: E0708 10:09:41.544014 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 8 10:09:41.544854 kubelet[2732]: E0708 10:09:41.544331 2732 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.544941 kubelet[2732]: E0708 10:09:41.544341 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-dz6z6" Jul 8 10:09:41.544941 kubelet[2732]: E0708 10:09:41.544359 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-dz6z6_calico-system(17edc73a-6465-4ae3-a2c7-29809051b166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-dz6z6_calico-system(17edc73a-6465-4ae3-a2c7-29809051b166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20c42fba53658c9ca397cff2388538e414d8ee037d53779f3801338b94055382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-dz6z6" podUID="17edc73a-6465-4ae3-a2c7-29809051b166" Jul 8 10:09:41.544941 kubelet[2732]: E0708 10:09:41.544303 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p"
Jul 8 10:09:41.545025 kubelet[2732]: E0708 10:09:41.544384 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d589945c5-wwn7p_calico-apiserver(2cff0e85-674f-4a7c-bb93-e9dd47a60308)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d589945c5-wwn7p_calico-apiserver(2cff0e85-674f-4a7c-bb93-e9dd47a60308)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02ef8b577869745c5031188a3b3819109118eb40956ada2d03a7bab24a28222d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p" podUID="2cff0e85-674f-4a7c-bb93-e9dd47a60308"
Jul 8 10:09:41.545025 kubelet[2732]: E0708 10:09:41.544350 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zr8q6_kube-system(061d687d-cf7c-442d-bfd4-1574d84082c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zr8q6_kube-system(061d687d-cf7c-442d-bfd4-1574d84082c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdaebe802e3e8969bda5f8326090b0c6e49f3edc7b951a115dc55d579c94ddcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zr8q6" podUID="061d687d-cf7c-442d-bfd4-1574d84082c3"
Jul 8 10:09:41.545025 kubelet[2732]: E0708 10:09:41.544071 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:41.545147 kubelet[2732]: E0708 10:09:41.544652 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp"
Jul 8 10:09:41.545147 kubelet[2732]: E0708 10:09:41.544668 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp"
Jul 8 10:09:41.545147 kubelet[2732]: E0708 10:09:41.544713 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d589945c5-lcwrp_calico-apiserver(ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d589945c5-lcwrp_calico-apiserver(ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d39046137a82e177517ac564e8f897ce8cc6d78708a010d82ddb295e15b91be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp" podUID="ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a"
Jul 8 10:09:42.203610 containerd[1595]: time="2025-07-08T10:09:42.203559951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 8 10:09:42.288574 systemd[1]: Created slice kubepods-besteffort-pod9657a89f_7630_4cdb_b9bf_4bfd1c84e9b7.slice - libcontainer container kubepods-besteffort-pod9657a89f_7630_4cdb_b9bf_4bfd1c84e9b7.slice.
Jul 8 10:09:42.293206 containerd[1595]: time="2025-07-08T10:09:42.293163650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fn56,Uid:9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7,Namespace:calico-system,Attempt:0,}"
Jul 8 10:09:42.342097 containerd[1595]: time="2025-07-08T10:09:42.342026187Z" level=error msg="Failed to destroy network for sandbox \"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:42.344721 containerd[1595]: time="2025-07-08T10:09:42.344652595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fn56,Uid:9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:42.345024 systemd[1]: run-netns-cni\x2d7902f5b4\x2d8af0\x2d32d2\x2db9b5\x2de177b75e1927.mount: Deactivated successfully.
Jul 8 10:09:42.345153 kubelet[2732]: E0708 10:09:42.345007 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:42.345153 kubelet[2732]: E0708 10:09:42.345104 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6fn56"
Jul 8 10:09:42.345153 kubelet[2732]: E0708 10:09:42.345125 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6fn56"
Jul 8 10:09:42.345263 kubelet[2732]: E0708 10:09:42.345180 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6fn56_calico-system(9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6fn56_calico-system(9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96b81f17355cd7b65ac48a735310203204948b9255dc593f864a935e216272a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6fn56" podUID="9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7"
Jul 8 10:09:49.039500 kubelet[2732]: I0708 10:09:49.039445 2732 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 8 10:09:49.279907 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:47056.service - OpenSSH per-connection server daemon (10.0.0.1:47056).
Jul 8 10:09:49.353921 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 47056 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:09:49.355941 sshd-session[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:09:49.361418 systemd-logind[1579]: New session 8 of user core.
Jul 8 10:09:49.371298 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 8 10:09:49.550648 sshd[3898]: Connection closed by 10.0.0.1 port 47056
Jul 8 10:09:49.551261 sshd-session[3893]: pam_unix(sshd:session): session closed for user core
Jul 8 10:09:49.557115 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:47056.service: Deactivated successfully.
Jul 8 10:09:49.559356 systemd[1]: session-8.scope: Deactivated successfully.
Jul 8 10:09:49.560144 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit.
Jul 8 10:09:49.562182 systemd-logind[1579]: Removed session 8.
Jul 8 10:09:51.382419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1858082675.mount: Deactivated successfully.
Jul 8 10:09:52.226263 containerd[1595]: time="2025-07-08T10:09:52.226192138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:52.227852 containerd[1595]: time="2025-07-08T10:09:52.227826356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Jul 8 10:09:52.229487 containerd[1595]: time="2025-07-08T10:09:52.229459372Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:52.231839 containerd[1595]: time="2025-07-08T10:09:52.231813122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:09:52.232611 containerd[1595]: time="2025-07-08T10:09:52.232515831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.028899322s"
Jul 8 10:09:52.232611 containerd[1595]: time="2025-07-08T10:09:52.232585021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\""
Jul 8 10:09:52.241470 containerd[1595]: time="2025-07-08T10:09:52.241419398Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 8 10:09:52.268312 containerd[1595]: time="2025-07-08T10:09:52.268263240Z" level=info msg="Container 556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288: CDI devices from CRI Config.CDIDevices: []"
Jul 8 10:09:52.281700 containerd[1595]: time="2025-07-08T10:09:52.281652138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,}"
Jul 8 10:09:52.281861 containerd[1595]: time="2025-07-08T10:09:52.281829091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,}"
Jul 8 10:09:52.288239 containerd[1595]: time="2025-07-08T10:09:52.288194191Z" level=info msg="CreateContainer within sandbox \"16b1f70400056d43f56eaccf7da6c141f658efd4643feb7623876ddc7ff5318b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\""
Jul 8 10:09:52.289380 containerd[1595]: time="2025-07-08T10:09:52.289341786Z" level=info msg="StartContainer for \"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\""
Jul 8 10:09:52.291124 containerd[1595]: time="2025-07-08T10:09:52.290910641Z" level=info msg="connecting to shim 556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288" address="unix:///run/containerd/s/d43a77411cb518f7a40047529fe566ebb980196ca9c6944e29cb846cdedc7d29" protocol=ttrpc version=3
Jul 8 10:09:52.316670 systemd[1]: Started cri-containerd-556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288.scope - libcontainer container 556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288.
Jul 8 10:09:52.359433 containerd[1595]: time="2025-07-08T10:09:52.359365481Z" level=error msg="Failed to destroy network for sandbox \"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.363280 containerd[1595]: time="2025-07-08T10:09:52.363248722Z" level=error msg="Failed to destroy network for sandbox \"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.956818 containerd[1595]: time="2025-07-08T10:09:52.956727155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.957112 kubelet[2732]: E0708 10:09:52.957038 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.957607 kubelet[2732]: E0708 10:09:52.957147 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zr8q6"
Jul 8 10:09:52.957607 kubelet[2732]: E0708 10:09:52.957173 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zr8q6"
Jul 8 10:09:52.959936 kubelet[2732]: E0708 10:09:52.959877 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zr8q6_kube-system(061d687d-cf7c-442d-bfd4-1574d84082c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-zr8q6_kube-system(061d687d-cf7c-442d-bfd4-1574d84082c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b4a8130d751a4bfe89ea68cd9b750499db1902e56a0984c3781d6ef83c35e40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zr8q6" podUID="061d687d-cf7c-442d-bfd4-1574d84082c3"
Jul 8 10:09:52.974324 containerd[1595]: time="2025-07-08T10:09:52.974271225Z" level=info msg="StartContainer for \"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\" returns successfully"
Jul 8 10:09:52.976917 containerd[1595]: time="2025-07-08T10:09:52.976869344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.977452 kubelet[2732]: E0708 10:09:52.977313 2732 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 8 10:09:52.977452 kubelet[2732]: E0708 10:09:52.977402 2732 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6"
Jul 8 10:09:52.977727 kubelet[2732]: E0708 10:09:52.977624 2732 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6"
Jul 8 10:09:52.977902 kubelet[2732]: E0708 10:09:52.977805 2732 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b995ff85b-5jvv6_calico-system(2f37cc6d-720d-4b49-b845-4d7d2188107a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b995ff85b-5jvv6_calico-system(2f37cc6d-720d-4b49-b845-4d7d2188107a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98613b257069be58715f76a99a8a84594d8258091ee60bf3ee89fc3d029f3e74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" podUID="2f37cc6d-720d-4b49-b845-4d7d2188107a"
Jul 8 10:09:52.980883 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 8 10:09:52.980968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 8 10:09:53.088594 kubelet[2732]: I0708 10:09:53.088462 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-backend-key-pair\") pod \"d86093fb-3f40-4474-985f-2d29a5aa42f9\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") "
Jul 8 10:09:53.089235 kubelet[2732]: I0708 10:09:53.088968 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-ca-bundle\") pod \"d86093fb-3f40-4474-985f-2d29a5aa42f9\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") "
Jul 8 10:09:53.089235 kubelet[2732]: I0708 10:09:53.088995 2732 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz2d8\" (UniqueName: \"kubernetes.io/projected/d86093fb-3f40-4474-985f-2d29a5aa42f9-kube-api-access-vz2d8\") pod \"d86093fb-3f40-4474-985f-2d29a5aa42f9\" (UID: \"d86093fb-3f40-4474-985f-2d29a5aa42f9\") "
Jul 8 10:09:53.089913 kubelet[2732]: I0708 10:09:53.089812 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d86093fb-3f40-4474-985f-2d29a5aa42f9" (UID: "d86093fb-3f40-4474-985f-2d29a5aa42f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 8 10:09:53.099849 kubelet[2732]: I0708 10:09:53.099760 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d86093fb-3f40-4474-985f-2d29a5aa42f9" (UID: "d86093fb-3f40-4474-985f-2d29a5aa42f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 8 10:09:53.100037 systemd[1]: var-lib-kubelet-pods-d86093fb\x2d3f40\x2d4474\x2d985f\x2d2d29a5aa42f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz2d8.mount: Deactivated successfully.
Jul 8 10:09:53.102024 kubelet[2732]: I0708 10:09:53.100189 2732 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d86093fb-3f40-4474-985f-2d29a5aa42f9-kube-api-access-vz2d8" (OuterVolumeSpecName: "kube-api-access-vz2d8") pod "d86093fb-3f40-4474-985f-2d29a5aa42f9" (UID: "d86093fb-3f40-4474-985f-2d29a5aa42f9"). InnerVolumeSpecName "kube-api-access-vz2d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 8 10:09:53.100655 systemd[1]: var-lib-kubelet-pods-d86093fb\x2d3f40\x2d4474\x2d985f\x2d2d29a5aa42f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 8 10:09:53.189703 kubelet[2732]: I0708 10:09:53.189582 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jul 8 10:09:53.190010 kubelet[2732]: I0708 10:09:53.189724 2732 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d86093fb-3f40-4474-985f-2d29a5aa42f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jul 8 10:09:53.190010 kubelet[2732]: I0708 10:09:53.189734 2732 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz2d8\" (UniqueName: \"kubernetes.io/projected/d86093fb-3f40-4474-985f-2d29a5aa42f9-kube-api-access-vz2d8\") on node \"localhost\" DevicePath \"\""
Jul 8 10:09:53.276995 systemd[1]: Removed slice kubepods-besteffort-podd86093fb_3f40_4474_985f_2d29a5aa42f9.slice - libcontainer container kubepods-besteffort-podd86093fb_3f40_4474_985f_2d29a5aa42f9.slice.
Jul 8 10:09:53.282433 containerd[1595]: time="2025-07-08T10:09:53.282387302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t9sld,Uid:8abd2c33-f81d-4365-ad56-4ad33e89415f,Namespace:kube-system,Attempt:0,}"
Jul 8 10:09:53.284115 kubelet[2732]: I0708 10:09:53.283999 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v7ftq" podStartSLOduration=3.920933587 podStartE2EDuration="27.283839028s" podCreationTimestamp="2025-07-08 10:09:26 +0000 UTC" firstStartedPulling="2025-07-08 10:09:28.870542711 +0000 UTC m=+25.682365824" lastFinishedPulling="2025-07-08 10:09:52.233448152 +0000 UTC m=+49.045271265" observedRunningTime="2025-07-08 10:09:53.280274456 +0000 UTC m=+50.092097579" watchObservedRunningTime="2025-07-08 10:09:53.283839028 +0000 UTC m=+50.095662141"
Jul 8 10:09:53.349947 systemd[1]: Created slice kubepods-besteffort-pod17018493_76f5_49b7_85c9_634ec1d59cbf.slice - libcontainer container kubepods-besteffort-pod17018493_76f5_49b7_85c9_634ec1d59cbf.slice.
Jul 8 10:09:53.391349 kubelet[2732]: I0708 10:09:53.391295 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/17018493-76f5-49b7-85c9-634ec1d59cbf-whisker-backend-key-pair\") pod \"whisker-6768bdd548-mmrtt\" (UID: \"17018493-76f5-49b7-85c9-634ec1d59cbf\") " pod="calico-system/whisker-6768bdd548-mmrtt"
Jul 8 10:09:53.391515 kubelet[2732]: I0708 10:09:53.391437 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17018493-76f5-49b7-85c9-634ec1d59cbf-whisker-ca-bundle\") pod \"whisker-6768bdd548-mmrtt\" (UID: \"17018493-76f5-49b7-85c9-634ec1d59cbf\") " pod="calico-system/whisker-6768bdd548-mmrtt"
Jul 8 10:09:53.391515 kubelet[2732]: I0708 10:09:53.391465 2732 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jsf\" (UniqueName: \"kubernetes.io/projected/17018493-76f5-49b7-85c9-634ec1d59cbf-kube-api-access-b8jsf\") pod \"whisker-6768bdd548-mmrtt\" (UID: \"17018493-76f5-49b7-85c9-634ec1d59cbf\") " pod="calico-system/whisker-6768bdd548-mmrtt"
Jul 8 10:09:53.454272 systemd-networkd[1494]: cali753de41101b: Link UP
Jul 8 10:09:53.454586 systemd-networkd[1494]: cali753de41101b: Gained carrier
Jul 8 10:09:53.469226 containerd[1595]: 2025-07-08 10:09:53.312 [INFO][4041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 8 10:09:53.469226 containerd[1595]: 2025-07-08 10:09:53.342 [INFO][4041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0 coredns-7c65d6cfc9- kube-system 8abd2c33-f81d-4365-ad56-4ad33e89415f 815 0 2025-07-08 10:09:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-t9sld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali753de41101b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-"
Jul 8 10:09:53.469226 containerd[1595]: 2025-07-08 10:09:53.342 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.469226 containerd[1595]: 2025-07-08 10:09:53.410 [INFO][4054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" HandleID="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Workload="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.411 [INFO][4054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" HandleID="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Workload="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-t9sld", "timestamp":"2025-07-08 10:09:53.4107802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.411 [INFO][4054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.411 [INFO][4054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.412 [INFO][4054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.419 [INFO][4054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" host="localhost"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.424 [INFO][4054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.429 [INFO][4054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.430 [INFO][4054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.433 [INFO][4054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 8 10:09:53.469464 containerd[1595]: 2025-07-08 10:09:53.433 [INFO][4054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" host="localhost"
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.434 [INFO][4054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.438 [INFO][4054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" host="localhost"
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.443 [INFO][4054] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" host="localhost"
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.443 [INFO][4054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" host="localhost"
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.443 [INFO][4054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 8 10:09:53.469693 containerd[1595]: 2025-07-08 10:09:53.443 [INFO][4054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" HandleID="k8s-pod-network.bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Workload="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.469877 containerd[1595]: 2025-07-08 10:09:53.447 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8abd2c33-f81d-4365-ad56-4ad33e89415f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-t9sld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali753de41101b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 8 10:09:53.469963 containerd[1595]: 2025-07-08 10:09:53.447 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.469963 containerd[1595]: 2025-07-08 10:09:53.447 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali753de41101b ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.469963 containerd[1595]: 2025-07-08 10:09:53.455 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.470038 containerd[1595]: 2025-07-08 10:09:53.455 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8abd2c33-f81d-4365-ad56-4ad33e89415f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961", Pod:"coredns-7c65d6cfc9-t9sld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali753de41101b", MAC:"da:7b:88:04:55:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 8 10:09:53.470038 containerd[1595]: 2025-07-08 10:09:53.463 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t9sld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t9sld-eth0"
Jul 8 10:09:53.546728 containerd[1595]: time="2025-07-08T10:09:53.546550598Z" level=info msg="connecting to shim bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961" address="unix:///run/containerd/s/0ab1643dac750982cef77c05614f2c48c4584dc7be87bfae0f27b9573da09489" namespace=k8s.io protocol=ttrpc version=3
Jul 8 10:09:53.578333 systemd[1]: Started cri-containerd-bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961.scope - libcontainer container bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961.
Jul 8 10:09:53.594141 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 8 10:09:53.655597 containerd[1595]: time="2025-07-08T10:09:53.655536410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6768bdd548-mmrtt,Uid:17018493-76f5-49b7-85c9-634ec1d59cbf,Namespace:calico-system,Attempt:0,}"
Jul 8 10:09:53.812226 containerd[1595]: time="2025-07-08T10:09:53.810355620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t9sld,Uid:8abd2c33-f81d-4365-ad56-4ad33e89415f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961\""
Jul 8 10:09:53.815559 containerd[1595]: time="2025-07-08T10:09:53.815503935Z" level=info msg="CreateContainer within sandbox \"bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 8 10:09:53.831940 containerd[1595]: time="2025-07-08T10:09:53.831880461Z" level=info msg="Container 7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b: CDI devices from CRI Config.CDIDevices: []"
Jul 8 10:09:53.840235 containerd[1595]: time="2025-07-08T10:09:53.840201223Z" level=info msg="CreateContainer within sandbox \"bea5f0f18474e8c0cab7c3854558f5172e8cbaa49488a7fb6cd29cc89f5eb961\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b\""
Jul 8 10:09:53.841127 containerd[1595]: time="2025-07-08T10:09:53.841087717Z" level=info msg="StartContainer for \"7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b\""
Jul 8 10:09:53.842027 containerd[1595]: time="2025-07-08T10:09:53.841993338Z" level=info msg="connecting to shim 7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b" address="unix:///run/containerd/s/0ab1643dac750982cef77c05614f2c48c4584dc7be87bfae0f27b9573da09489" protocol=ttrpc version=3 Jul
8 10:09:53.865220 systemd[1]: Started cri-containerd-7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b.scope - libcontainer container 7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b. Jul 8 10:09:53.901631 containerd[1595]: time="2025-07-08T10:09:53.901579425Z" level=info msg="StartContainer for \"7b5ead9fe55f95fb0d0049eabba7f28b6e25b9615785b62901da1a9d06c1515b\" returns successfully" Jul 8 10:09:53.929643 systemd-networkd[1494]: cali0dcd05db5da: Link UP Jul 8 10:09:53.931617 systemd-networkd[1494]: cali0dcd05db5da: Gained carrier Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.842 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.854 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6768bdd548--mmrtt-eth0 whisker-6768bdd548- calico-system 17018493-76f5-49b7-85c9-634ec1d59cbf 947 0 2025-07-08 10:09:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6768bdd548 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6768bdd548-mmrtt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0dcd05db5da [] [] }} ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.855 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.883 [INFO][4146] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" HandleID="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Workload="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.883 [INFO][4146] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" HandleID="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Workload="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6768bdd548-mmrtt", "timestamp":"2025-07-08 10:09:53.88336005 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.883 [INFO][4146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.883 [INFO][4146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.883 [INFO][4146] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.892 [INFO][4146] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.898 [INFO][4146] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.904 [INFO][4146] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.907 [INFO][4146] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.909 [INFO][4146] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.910 [INFO][4146] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.912 [INFO][4146] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210 Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.916 [INFO][4146] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.923 [INFO][4146] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.923 [INFO][4146] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" host="localhost" Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.923 [INFO][4146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:09:53.946878 containerd[1595]: 2025-07-08 10:09:53.923 [INFO][4146] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" HandleID="k8s-pod-network.0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Workload="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.927 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6768bdd548--mmrtt-eth0", GenerateName:"whisker-6768bdd548-", Namespace:"calico-system", SelfLink:"", UID:"17018493-76f5-49b7-85c9-634ec1d59cbf", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6768bdd548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6768bdd548-mmrtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0dcd05db5da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.927 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.927 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dcd05db5da ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.932 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.933 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" 
WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6768bdd548--mmrtt-eth0", GenerateName:"whisker-6768bdd548-", Namespace:"calico-system", SelfLink:"", UID:"17018493-76f5-49b7-85c9-634ec1d59cbf", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6768bdd548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210", Pod:"whisker-6768bdd548-mmrtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0dcd05db5da", MAC:"b6:2f:b3:dc:70:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:53.947673 containerd[1595]: 2025-07-08 10:09:53.941 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" Namespace="calico-system" Pod="whisker-6768bdd548-mmrtt" WorkloadEndpoint="localhost-k8s-whisker--6768bdd548--mmrtt-eth0" Jul 8 10:09:53.970324 containerd[1595]: time="2025-07-08T10:09:53.970261680Z" level=info msg="connecting to shim 
0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210" address="unix:///run/containerd/s/1bd4010bf66f09d173131ca28db668959aeb04f54fcc5350cb4162115afd0d06" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:54.000355 systemd[1]: Started cri-containerd-0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210.scope - libcontainer container 0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210. Jul 8 10:09:54.014001 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:09:54.042965 containerd[1595]: time="2025-07-08T10:09:54.042901379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6768bdd548-mmrtt,Uid:17018493-76f5-49b7-85c9-634ec1d59cbf,Namespace:calico-system,Attempt:0,} returns sandbox id \"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210\"" Jul 8 10:09:54.044877 containerd[1595]: time="2025-07-08T10:09:54.044722187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 8 10:09:54.284117 containerd[1595]: time="2025-07-08T10:09:54.283536286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-lcwrp,Uid:ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:09:54.289331 containerd[1595]: time="2025-07-08T10:09:54.289258889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dz6z6,Uid:17edc73a-6465-4ae3-a2c7-29809051b166,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:54.292241 kubelet[2732]: I0708 10:09:54.291808 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t9sld" podStartSLOduration=44.291780732 podStartE2EDuration="44.291780732s" podCreationTimestamp="2025-07-08 10:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:09:54.287644377 +0000 UTC 
m=+51.099467501" watchObservedRunningTime="2025-07-08 10:09:54.291780732 +0000 UTC m=+51.103603835" Jul 8 10:09:54.565937 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:47062.service - OpenSSH per-connection server daemon (10.0.0.1:47062). Jul 8 10:09:54.637396 containerd[1595]: time="2025-07-08T10:09:54.637336884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\" id:\"d55d61067255448658c89cd934c2933a3dbf03fe5c3d72728634101721d6dd24\" pid:4372 exit_status:1 exited_at:{seconds:1751969394 nanos:636864647}" Jul 8 10:09:54.683405 systemd-networkd[1494]: caliec595912c50: Link UP Jul 8 10:09:54.685637 systemd-networkd[1494]: caliec595912c50: Gained carrier Jul 8 10:09:54.696555 systemd-networkd[1494]: cali753de41101b: Gained IPv6LL Jul 8 10:09:54.701912 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 47062 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:09:54.704264 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:09:54.717165 systemd-logind[1579]: New session 9 of user core. 
Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.493 [INFO][4343] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.512 [INFO][4343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0 goldmane-58fd7646b9- calico-system 17edc73a-6465-4ae3-a2c7-29809051b166 812 0 2025-07-08 10:09:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-dz6z6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliec595912c50 [] [] }} ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.512 [INFO][4343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.576 [INFO][4386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" HandleID="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Workload="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.577 [INFO][4386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" 
HandleID="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Workload="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-dz6z6", "timestamp":"2025-07-08 10:09:54.576675006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.577 [INFO][4386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.577 [INFO][4386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.577 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.585 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.593 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.599 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.600 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.602 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.602 
[INFO][4386] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.603 [INFO][4386] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3 Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.617 [INFO][4386] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4386] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" host="localhost" Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 8 10:09:54.717840 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" HandleID="k8s-pod-network.e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Workload="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.680 [INFO][4343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"17edc73a-6465-4ae3-a2c7-29809051b166", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-dz6z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec595912c50", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.680 [INFO][4343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.680 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec595912c50 ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.685 [INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.685 [INFO][4343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"17edc73a-6465-4ae3-a2c7-29809051b166", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 26, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3", Pod:"goldmane-58fd7646b9-dz6z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec595912c50", MAC:"92:c4:05:ee:e5:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:54.718390 containerd[1595]: 2025-07-08 10:09:54.709 [INFO][4343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" Namespace="calico-system" Pod="goldmane-58fd7646b9-dz6z6" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--dz6z6-eth0" Jul 8 10:09:54.719745 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 8 10:09:54.758467 containerd[1595]: time="2025-07-08T10:09:54.758393350Z" level=info msg="connecting to shim e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3" address="unix:///run/containerd/s/a1364fa2611ef7fa8c99ae9ee93125de1022fc9d23272e564f0b18f87e5f379b" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:54.770132 systemd-networkd[1494]: calidaeffee611f: Link UP Jul 8 10:09:54.771420 systemd-networkd[1494]: calidaeffee611f: Gained carrier Jul 8 10:09:54.800246 systemd[1]: Started cri-containerd-e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3.scope - libcontainer container e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3. Jul 8 10:09:54.815126 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.502 [INFO][4331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.519 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0 calico-apiserver-d589945c5- calico-apiserver ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a 816 0 2025-07-08 10:09:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d589945c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d589945c5-lcwrp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaeffee611f [] [] }} ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-" Jul 8 10:09:54.831211 
containerd[1595]: 2025-07-08 10:09:54.519 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.580 [INFO][4394] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" HandleID="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Workload="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.580 [INFO][4394] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" HandleID="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Workload="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d589945c5-lcwrp", "timestamp":"2025-07-08 10:09:54.580490317 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.580 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.675 [INFO][4394] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.688 [INFO][4394] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.709 [INFO][4394] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.718 [INFO][4394] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.724 [INFO][4394] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.727 [INFO][4394] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.728 [INFO][4394] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.730 [INFO][4394] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493 Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.740 [INFO][4394] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.749 [INFO][4394] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.750 [INFO][4394] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" host="localhost" Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.751 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:09:54.831211 containerd[1595]: 2025-07-08 10:09:54.751 [INFO][4394] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" HandleID="k8s-pod-network.4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Workload="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.757 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0", GenerateName:"calico-apiserver-d589945c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d589945c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d589945c5-lcwrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaeffee611f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.757 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.757 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaeffee611f ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.771 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.781 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0", GenerateName:"calico-apiserver-d589945c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d589945c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493", Pod:"calico-apiserver-d589945c5-lcwrp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaeffee611f", MAC:"e2:0d:a7:a7:e7:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:54.831851 containerd[1595]: 2025-07-08 10:09:54.816 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-lcwrp" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--lcwrp-eth0" Jul 8 10:09:54.937395 containerd[1595]: time="2025-07-08T10:09:54.937333868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-dz6z6,Uid:17edc73a-6465-4ae3-a2c7-29809051b166,Namespace:calico-system,Attempt:0,} returns sandbox id \"e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3\"" Jul 8 10:09:54.952418 systemd-networkd[1494]: vxlan.calico: Link UP Jul 8 10:09:54.952428 systemd-networkd[1494]: vxlan.calico: Gained carrier Jul 8 10:09:54.973226 containerd[1595]: time="2025-07-08T10:09:54.973087838Z" level=info msg="connecting to shim 4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493" address="unix:///run/containerd/s/1a7b1e5876bc2ddf60a837f14121b506e0c6ddfc2ab8efd5e059fda7a4887846" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:54.983810 sshd[4450]: Connection closed by 10.0.0.1 port 47062 Jul 8 10:09:54.984330 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jul 8 10:09:54.992596 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:47062.service: Deactivated successfully. Jul 8 10:09:54.994836 systemd[1]: session-9.scope: Deactivated successfully. Jul 8 10:09:55.002242 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit. Jul 8 10:09:55.003572 systemd-logind[1579]: Removed session 9. Jul 8 10:09:55.018261 systemd[1]: Started cri-containerd-4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493.scope - libcontainer container 4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493. 
Jul 8 10:09:55.033210 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:09:55.066713 containerd[1595]: time="2025-07-08T10:09:55.066403117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-lcwrp,Uid:ebd03ea5-e1d3-4fb4-9b4b-f073c91f8b8a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493\"" Jul 8 10:09:55.282610 containerd[1595]: time="2025-07-08T10:09:55.281242758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-wwn7p,Uid:2cff0e85-674f-4a7c-bb93-e9dd47a60308,Namespace:calico-apiserver,Attempt:0,}" Jul 8 10:09:55.284827 kubelet[2732]: I0708 10:09:55.284771 2732 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86093fb-3f40-4474-985f-2d29a5aa42f9" path="/var/lib/kubelet/pods/d86093fb-3f40-4474-985f-2d29a5aa42f9/volumes" Jul 8 10:09:55.373899 containerd[1595]: time="2025-07-08T10:09:55.373783198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\" id:\"83cd3f2183a0f1e14a142b801289d7afa39fb901057113b9305acb7f289ae96c\" pid:4654 exit_status:1 exited_at:{seconds:1751969395 nanos:373275875}" Jul 8 10:09:55.569210 systemd-networkd[1494]: cali9ebb405b357: Link UP Jul 8 10:09:55.572129 systemd-networkd[1494]: cali9ebb405b357: Gained carrier Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.329 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0 calico-apiserver-d589945c5- calico-apiserver 2cff0e85-674f-4a7c-bb93-e9dd47a60308 810 0 2025-07-08 10:09:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d589945c5 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d589945c5-wwn7p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ebb405b357 [] [] }} ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.330 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.354 [INFO][4672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" HandleID="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Workload="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.354 [INFO][4672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" HandleID="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Workload="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d589945c5-wwn7p", "timestamp":"2025-07-08 10:09:55.354650253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.354 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.355 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.355 [INFO][4672] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.361 [INFO][4672] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.365 [INFO][4672] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.370 [INFO][4672] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.372 [INFO][4672] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.374 [INFO][4672] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.374 [INFO][4672] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.375 [INFO][4672] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3 Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.418 [INFO][4672] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.561 [INFO][4672] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.562 [INFO][4672] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" host="localhost" Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.562 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:09:55.587986 containerd[1595]: 2025-07-08 10:09:55.562 [INFO][4672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" HandleID="k8s-pod-network.9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Workload="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.565 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0", GenerateName:"calico-apiserver-d589945c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cff0e85-674f-4a7c-bb93-e9dd47a60308", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 23, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d589945c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d589945c5-wwn7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ebb405b357", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.565 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.565 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ebb405b357 ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.570 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" 
Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.571 [INFO][4632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0", GenerateName:"calico-apiserver-d589945c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cff0e85-674f-4a7c-bb93-e9dd47a60308", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d589945c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3", Pod:"calico-apiserver-d589945c5-wwn7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ebb405b357", MAC:"5e:ea:9f:8a:97:b3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:55.588586 containerd[1595]: 2025-07-08 10:09:55.584 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" Namespace="calico-apiserver" Pod="calico-apiserver-d589945c5-wwn7p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d589945c5--wwn7p-eth0" Jul 8 10:09:55.624546 containerd[1595]: time="2025-07-08T10:09:55.624494635Z" level=info msg="connecting to shim 9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3" address="unix:///run/containerd/s/e54259ae745b2d286527b21cf39da5c9867b6905c19c816f4f80f8ae9879e516" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:55.654216 systemd[1]: Started cri-containerd-9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3.scope - libcontainer container 9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3. 
Jul 8 10:09:55.666265 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:09:55.696828 containerd[1595]: time="2025-07-08T10:09:55.696768642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d589945c5-wwn7p,Uid:2cff0e85-674f-4a7c-bb93-e9dd47a60308,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3\"" Jul 8 10:09:55.976253 systemd-networkd[1494]: cali0dcd05db5da: Gained IPv6LL Jul 8 10:09:56.104245 systemd-networkd[1494]: vxlan.calico: Gained IPv6LL Jul 8 10:09:56.168374 systemd-networkd[1494]: calidaeffee611f: Gained IPv6LL Jul 8 10:09:56.281316 containerd[1595]: time="2025-07-08T10:09:56.281266009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fn56,Uid:9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7,Namespace:calico-system,Attempt:0,}" Jul 8 10:09:56.296425 systemd-networkd[1494]: caliec595912c50: Gained IPv6LL Jul 8 10:09:56.375794 systemd-networkd[1494]: caliabc0c88db15: Link UP Jul 8 10:09:56.376347 systemd-networkd[1494]: caliabc0c88db15: Gained carrier Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.316 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6fn56-eth0 csi-node-driver- calico-system 9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7 703 0 2025-07-08 10:09:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6fn56 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliabc0c88db15 [] [] }} 
ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.316 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.340 [INFO][4758] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" HandleID="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Workload="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.340 [INFO][4758] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" HandleID="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Workload="localhost-k8s-csi--node--driver--6fn56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6fn56", "timestamp":"2025-07-08 10:09:56.340400204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.340 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.340 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.340 [INFO][4758] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.347 [INFO][4758] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.351 [INFO][4758] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.354 [INFO][4758] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.356 [INFO][4758] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.358 [INFO][4758] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.358 [INFO][4758] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.360 [INFO][4758] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373 Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.363 [INFO][4758] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.370 [INFO][4758] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.370 [INFO][4758] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" host="localhost" Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.370 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:09:56.392915 containerd[1595]: 2025-07-08 10:09:56.370 [INFO][4758] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" HandleID="k8s-pod-network.960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Workload="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.373 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6fn56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6fn56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabc0c88db15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.374 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.374 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabc0c88db15 ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.376 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.376 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" 
Namespace="calico-system" Pod="csi-node-driver-6fn56" WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6fn56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373", Pod:"csi-node-driver-6fn56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliabc0c88db15", MAC:"52:a8:7b:52:11:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:09:56.394008 containerd[1595]: 2025-07-08 10:09:56.388 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" Namespace="calico-system" Pod="csi-node-driver-6fn56" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6fn56-eth0" Jul 8 10:09:56.417293 containerd[1595]: time="2025-07-08T10:09:56.417243930Z" level=info msg="connecting to shim 960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373" address="unix:///run/containerd/s/09a85c1b6fac19af5b3d8e5e110ede81026a647c6062e033d40216cb670f7263" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:09:56.445210 systemd[1]: Started cri-containerd-960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373.scope - libcontainer container 960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373. Jul 8 10:09:56.458063 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:09:56.472870 containerd[1595]: time="2025-07-08T10:09:56.472823472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6fn56,Uid:9657a89f-7630-4cdb-b9bf-4bfd1c84e9b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373\"" Jul 8 10:09:57.000315 systemd-networkd[1494]: cali9ebb405b357: Gained IPv6LL Jul 8 10:09:57.153594 containerd[1595]: time="2025-07-08T10:09:57.153537117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:57.154296 containerd[1595]: time="2025-07-08T10:09:57.154241770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 8 10:09:57.155320 containerd[1595]: time="2025-07-08T10:09:57.155282864Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:57.157218 containerd[1595]: time="2025-07-08T10:09:57.157180055Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:09:57.157813 containerd[1595]: time="2025-07-08T10:09:57.157773839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.112891881s" Jul 8 10:09:57.157813 containerd[1595]: time="2025-07-08T10:09:57.157800679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 8 10:09:57.158838 containerd[1595]: time="2025-07-08T10:09:57.158804243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 8 10:09:57.166802 containerd[1595]: time="2025-07-08T10:09:57.159791196Z" level=info msg="CreateContainer within sandbox \"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 8 10:09:57.196182 containerd[1595]: time="2025-07-08T10:09:57.196111083Z" level=info msg="Container 6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:09:57.197903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013866580.mount: Deactivated successfully. 
Jul 8 10:09:57.205813 containerd[1595]: time="2025-07-08T10:09:57.205771404Z" level=info msg="CreateContainer within sandbox \"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519\"" Jul 8 10:09:57.206449 containerd[1595]: time="2025-07-08T10:09:57.206360520Z" level=info msg="StartContainer for \"6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519\"" Jul 8 10:09:57.207561 containerd[1595]: time="2025-07-08T10:09:57.207534163Z" level=info msg="connecting to shim 6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519" address="unix:///run/containerd/s/1bd4010bf66f09d173131ca28db668959aeb04f54fcc5350cb4162115afd0d06" protocol=ttrpc version=3 Jul 8 10:09:57.231206 systemd[1]: Started cri-containerd-6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519.scope - libcontainer container 6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519. Jul 8 10:09:57.286026 containerd[1595]: time="2025-07-08T10:09:57.285406319Z" level=info msg="StartContainer for \"6721abc271a707ad42d5fc0270d9237369a658780c60ebb62b957e56a8696519\" returns successfully" Jul 8 10:09:58.152262 systemd-networkd[1494]: caliabc0c88db15: Gained IPv6LL Jul 8 10:09:59.995147 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:42276.service - OpenSSH per-connection server daemon (10.0.0.1:42276). Jul 8 10:10:00.056459 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 42276 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:00.059589 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:00.066341 systemd-logind[1579]: New session 10 of user core. Jul 8 10:10:00.077397 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 8 10:10:00.218198 sshd[4870]: Connection closed by 10.0.0.1 port 42276 Jul 8 10:10:00.220312 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:00.225408 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:42276.service: Deactivated successfully. Jul 8 10:10:00.228363 systemd[1]: session-10.scope: Deactivated successfully. Jul 8 10:10:00.230247 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit. Jul 8 10:10:00.232793 systemd-logind[1579]: Removed session 10. Jul 8 10:10:00.248646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804676656.mount: Deactivated successfully. Jul 8 10:10:00.931282 containerd[1595]: time="2025-07-08T10:10:00.931211986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:00.932040 containerd[1595]: time="2025-07-08T10:10:00.931973344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 8 10:10:00.933393 containerd[1595]: time="2025-07-08T10:10:00.933341542Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:00.935308 containerd[1595]: time="2025-07-08T10:10:00.935273377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:00.935884 containerd[1595]: time="2025-07-08T10:10:00.935856601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.777015058s" Jul 8 10:10:00.935931 containerd[1595]: time="2025-07-08T10:10:00.935887619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 8 10:10:00.937154 containerd[1595]: time="2025-07-08T10:10:00.936676860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 8 10:10:00.937956 containerd[1595]: time="2025-07-08T10:10:00.937926485Z" level=info msg="CreateContainer within sandbox \"e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 8 10:10:01.090431 containerd[1595]: time="2025-07-08T10:10:01.090333995Z" level=info msg="Container b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:01.331177 containerd[1595]: time="2025-07-08T10:10:01.331118785Z" level=info msg="CreateContainer within sandbox \"e880a05b23b002a3939d90fe3c046ea527b6018e3d956ce5d230e585740994a3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\"" Jul 8 10:10:01.331838 containerd[1595]: time="2025-07-08T10:10:01.331687221Z" level=info msg="StartContainer for \"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\"" Jul 8 10:10:01.332744 containerd[1595]: time="2025-07-08T10:10:01.332717294Z" level=info msg="connecting to shim b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366" address="unix:///run/containerd/s/a1364fa2611ef7fa8c99ae9ee93125de1022fc9d23272e564f0b18f87e5f379b" protocol=ttrpc version=3 Jul 8 10:10:01.360247 systemd[1]: Started cri-containerd-b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366.scope - libcontainer container 
b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366. Jul 8 10:10:01.686710 containerd[1595]: time="2025-07-08T10:10:01.686521798Z" level=info msg="StartContainer for \"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" returns successfully" Jul 8 10:10:02.413151 containerd[1595]: time="2025-07-08T10:10:02.413008372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" id:\"308b335a074fb904ff5596918f8c1640fd9eb4a7422e04c75a22902717011aba\" pid:4946 exit_status:1 exited_at:{seconds:1751969402 nanos:412563417}" Jul 8 10:10:02.431004 kubelet[2732]: I0708 10:10:02.430902 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-dz6z6" podStartSLOduration=30.436218913 podStartE2EDuration="36.430885024s" podCreationTimestamp="2025-07-08 10:09:26 +0000 UTC" firstStartedPulling="2025-07-08 10:09:54.941904068 +0000 UTC m=+51.753727182" lastFinishedPulling="2025-07-08 10:10:00.93657018 +0000 UTC m=+57.748393293" observedRunningTime="2025-07-08 10:10:02.429453368 +0000 UTC m=+59.241276491" watchObservedRunningTime="2025-07-08 10:10:02.430885024 +0000 UTC m=+59.242708137" Jul 8 10:10:03.421933 containerd[1595]: time="2025-07-08T10:10:03.421878933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" id:\"3c887a5bda27ec4ba82d6ccdd562d21ac28abefd45229f466bb49844672a3db5\" pid:4975 exit_status:1 exited_at:{seconds:1751969403 nanos:421541820}" Jul 8 10:10:04.441841 containerd[1595]: time="2025-07-08T10:10:04.441777170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:04.442572 containerd[1595]: time="2025-07-08T10:10:04.442531245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes 
read=47317977" Jul 8 10:10:04.443961 containerd[1595]: time="2025-07-08T10:10:04.443927374Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:04.446428 containerd[1595]: time="2025-07-08T10:10:04.446374716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:04.447020 containerd[1595]: time="2025-07-08T10:10:04.446988347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.510282823s" Jul 8 10:10:04.447061 containerd[1595]: time="2025-07-08T10:10:04.447019705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 8 10:10:04.451797 containerd[1595]: time="2025-07-08T10:10:04.451768946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 8 10:10:04.462232 containerd[1595]: time="2025-07-08T10:10:04.462179227Z" level=info msg="CreateContainer within sandbox \"4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 8 10:10:04.471356 containerd[1595]: time="2025-07-08T10:10:04.471299779Z" level=info msg="Container 66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:04.478880 containerd[1595]: time="2025-07-08T10:10:04.478812645Z" level=info msg="CreateContainer 
within sandbox \"4da3234d9886716ec497f3cb3299c5c3e8466b7bad22d42212c43394a8aaf493\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9\"" Jul 8 10:10:04.481998 containerd[1595]: time="2025-07-08T10:10:04.481964699Z" level=info msg="StartContainer for \"66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9\"" Jul 8 10:10:04.483824 containerd[1595]: time="2025-07-08T10:10:04.483786467Z" level=info msg="connecting to shim 66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9" address="unix:///run/containerd/s/1a7b1e5876bc2ddf60a837f14121b506e0c6ddfc2ab8efd5e059fda7a4887846" protocol=ttrpc version=3 Jul 8 10:10:04.535234 systemd[1]: Started cri-containerd-66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9.scope - libcontainer container 66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9. Jul 8 10:10:04.587401 containerd[1595]: time="2025-07-08T10:10:04.587351645Z" level=info msg="StartContainer for \"66c072c6d0256f6b8a1705ae621e2e296231b1464a9ef25bd084f17fc4d8e0b9\" returns successfully" Jul 8 10:10:04.823821 containerd[1595]: time="2025-07-08T10:10:04.823765011Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:04.824548 containerd[1595]: time="2025-07-08T10:10:04.824522803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 8 10:10:04.826332 containerd[1595]: time="2025-07-08T10:10:04.826300468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 
374.505604ms" Jul 8 10:10:04.826332 containerd[1595]: time="2025-07-08T10:10:04.826326817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 8 10:10:04.827257 containerd[1595]: time="2025-07-08T10:10:04.827220784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 8 10:10:04.828777 containerd[1595]: time="2025-07-08T10:10:04.828751577Z" level=info msg="CreateContainer within sandbox \"9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 8 10:10:04.839314 containerd[1595]: time="2025-07-08T10:10:04.839260744Z" level=info msg="Container d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:04.846851 containerd[1595]: time="2025-07-08T10:10:04.846823233Z" level=info msg="CreateContainer within sandbox \"9e1fb1c9e8d7afff4523379535cff23b9047f0ea2331b0bc67d40a5c3a3778c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545\"" Jul 8 10:10:04.847350 containerd[1595]: time="2025-07-08T10:10:04.847322720Z" level=info msg="StartContainer for \"d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545\"" Jul 8 10:10:04.848510 containerd[1595]: time="2025-07-08T10:10:04.848483879Z" level=info msg="connecting to shim d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545" address="unix:///run/containerd/s/e54259ae745b2d286527b21cf39da5c9867b6905c19c816f4f80f8ae9879e516" protocol=ttrpc version=3 Jul 8 10:10:04.869261 systemd[1]: Started cri-containerd-d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545.scope - libcontainer container d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545. 
Jul 8 10:10:04.917694 containerd[1595]: time="2025-07-08T10:10:04.917644938Z" level=info msg="StartContainer for \"d06c343212c7498ccefbe111cd653b51988a2eb59077957ee8ec491b5a801545\" returns successfully" Jul 8 10:10:05.232747 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:42282.service - OpenSSH per-connection server daemon (10.0.0.1:42282). Jul 8 10:10:05.302902 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:05.304710 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:05.310253 systemd-logind[1579]: New session 11 of user core. Jul 8 10:10:05.315518 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 8 10:10:05.604272 kubelet[2732]: I0708 10:10:05.604073 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d589945c5-wwn7p" podStartSLOduration=33.475472514 podStartE2EDuration="42.604031003s" podCreationTimestamp="2025-07-08 10:09:23 +0000 UTC" firstStartedPulling="2025-07-08 10:09:55.698495092 +0000 UTC m=+52.510318205" lastFinishedPulling="2025-07-08 10:10:04.827053591 +0000 UTC m=+61.638876694" observedRunningTime="2025-07-08 10:10:05.603069609 +0000 UTC m=+62.414892732" watchObservedRunningTime="2025-07-08 10:10:05.604031003 +0000 UTC m=+62.415854116" Jul 8 10:10:05.613004 sshd[5072]: Connection closed by 10.0.0.1 port 42282 Jul 8 10:10:05.613359 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:05.624725 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:42282.service: Deactivated successfully. Jul 8 10:10:05.627232 systemd[1]: session-11.scope: Deactivated successfully. Jul 8 10:10:05.628804 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit. Jul 8 10:10:05.631407 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:42298.service - OpenSSH per-connection server daemon (10.0.0.1:42298). 
Jul 8 10:10:05.632638 systemd-logind[1579]: Removed session 11. Jul 8 10:10:05.678454 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 42298 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:05.679698 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:05.684052 systemd-logind[1579]: New session 12 of user core. Jul 8 10:10:05.691223 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 8 10:10:05.913903 kubelet[2732]: I0708 10:10:05.913508 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d589945c5-lcwrp" podStartSLOduration=33.530580374 podStartE2EDuration="42.913485595s" podCreationTimestamp="2025-07-08 10:09:23 +0000 UTC" firstStartedPulling="2025-07-08 10:09:55.068619107 +0000 UTC m=+51.880442220" lastFinishedPulling="2025-07-08 10:10:04.451524328 +0000 UTC m=+61.263347441" observedRunningTime="2025-07-08 10:10:05.912448079 +0000 UTC m=+62.724271202" watchObservedRunningTime="2025-07-08 10:10:05.913485595 +0000 UTC m=+62.725308708" Jul 8 10:10:05.931654 sshd[5092]: Connection closed by 10.0.0.1 port 42298 Jul 8 10:10:05.932407 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:05.942532 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:42298.service: Deactivated successfully. Jul 8 10:10:05.945957 systemd[1]: session-12.scope: Deactivated successfully. Jul 8 10:10:05.949344 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit. Jul 8 10:10:05.953375 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:42302.service - OpenSSH per-connection server daemon (10.0.0.1:42302). Jul 8 10:10:05.958271 systemd-logind[1579]: Removed session 12. 
Jul 8 10:10:06.004036 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:06.005375 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:06.009826 systemd-logind[1579]: New session 13 of user core. Jul 8 10:10:06.021381 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 8 10:10:06.163475 sshd[5111]: Connection closed by 10.0.0.1 port 42302 Jul 8 10:10:06.164626 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:06.171304 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:42302.service: Deactivated successfully. Jul 8 10:10:06.176125 systemd[1]: session-13.scope: Deactivated successfully. Jul 8 10:10:06.177556 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit. Jul 8 10:10:06.179366 systemd-logind[1579]: Removed session 13. Jul 8 10:10:06.282179 containerd[1595]: time="2025-07-08T10:10:06.282046258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,}" Jul 8 10:10:06.492657 systemd-networkd[1494]: cali6427aa4f6be: Link UP Jul 8 10:10:06.492872 systemd-networkd[1494]: cali6427aa4f6be: Gained carrier Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.413 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0 coredns-7c65d6cfc9- kube-system 061d687d-cf7c-442d-bfd4-1574d84082c3 805 0 2025-07-08 10:09:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zr8q6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6427aa4f6be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.414 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.448 [INFO][5140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" HandleID="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Workload="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.448 [INFO][5140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" HandleID="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Workload="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zr8q6", "timestamp":"2025-07-08 10:10:06.448342026 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.448 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.448 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.448 [INFO][5140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.458 [INFO][5140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.464 [INFO][5140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.468 [INFO][5140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.470 [INFO][5140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.472 [INFO][5140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.472 [INFO][5140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.474 [INFO][5140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724 Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.478 [INFO][5140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.485 [INFO][5140] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.485 [INFO][5140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" host="localhost" Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.485 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:10:06.506116 containerd[1595]: 2025-07-08 10:10:06.486 [INFO][5140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" HandleID="k8s-pod-network.b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Workload="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.489 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"061d687d-cf7c-442d-bfd4-1574d84082c3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zr8q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6427aa4f6be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.489 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.489 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6427aa4f6be ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.493 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.494 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"061d687d-cf7c-442d-bfd4-1574d84082c3", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724", Pod:"coredns-7c65d6cfc9-zr8q6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6427aa4f6be", MAC:"ee:a1:d1:82:2b:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:10:06.506989 containerd[1595]: 2025-07-08 10:10:06.502 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zr8q6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zr8q6-eth0" Jul 8 10:10:06.730174 containerd[1595]: time="2025-07-08T10:10:06.730111228Z" level=info msg="connecting to shim b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724" address="unix:///run/containerd/s/3ecef60fa558e8f27d0a0f41975aef39f4dbe9ed79fb60bd4e335579e06b29e8" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:10:06.766207 systemd[1]: Started cri-containerd-b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724.scope - libcontainer container b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724. 
Jul 8 10:10:06.784576 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:10:06.820323 containerd[1595]: time="2025-07-08T10:10:06.820266212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zr8q6,Uid:061d687d-cf7c-442d-bfd4-1574d84082c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724\"" Jul 8 10:10:06.825839 containerd[1595]: time="2025-07-08T10:10:06.825392991Z" level=info msg="CreateContainer within sandbox \"b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 8 10:10:06.841207 containerd[1595]: time="2025-07-08T10:10:06.841057758Z" level=info msg="Container f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:06.850465 containerd[1595]: time="2025-07-08T10:10:06.850321137Z" level=info msg="CreateContainer within sandbox \"b55f7dd8939afee4c846772d4c42371d7ce91ec5aff52b49779189a2b0010724\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d\"" Jul 8 10:10:06.852669 containerd[1595]: time="2025-07-08T10:10:06.852628135Z" level=info msg="StartContainer for \"f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d\"" Jul 8 10:10:06.854117 containerd[1595]: time="2025-07-08T10:10:06.854061835Z" level=info msg="connecting to shim f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d" address="unix:///run/containerd/s/3ecef60fa558e8f27d0a0f41975aef39f4dbe9ed79fb60bd4e335579e06b29e8" protocol=ttrpc version=3 Jul 8 10:10:06.879228 systemd[1]: Started cri-containerd-f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d.scope - libcontainer container f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d. 
Jul 8 10:10:06.911963 containerd[1595]: time="2025-07-08T10:10:06.911914879Z" level=info msg="StartContainer for \"f8003654d85fedca708a78cb6de7116fda968f36d4c6829f388c51577838044d\" returns successfully" Jul 8 10:10:07.282212 containerd[1595]: time="2025-07-08T10:10:07.282148516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,}" Jul 8 10:10:07.500038 kubelet[2732]: I0708 10:10:07.499867 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zr8q6" podStartSLOduration=57.499842064 podStartE2EDuration="57.499842064s" podCreationTimestamp="2025-07-08 10:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-08 10:10:07.499508399 +0000 UTC m=+64.311331522" watchObservedRunningTime="2025-07-08 10:10:07.499842064 +0000 UTC m=+64.311665177" Jul 8 10:10:07.597566 systemd-networkd[1494]: cali854fd9de31a: Link UP Jul 8 10:10:07.598224 systemd-networkd[1494]: cali854fd9de31a: Gained carrier Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.518 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0 calico-kube-controllers-6b995ff85b- calico-system 2f37cc6d-720d-4b49-b845-4d7d2188107a 813 0 2025-07-08 10:09:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b995ff85b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b995ff85b-5jvv6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali854fd9de31a [] [] }} 
ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.518 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.546 [INFO][5261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" HandleID="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Workload="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.546 [INFO][5261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" HandleID="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Workload="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c16f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b995ff85b-5jvv6", "timestamp":"2025-07-08 10:10:07.546481058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.546 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.546 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.546 [INFO][5261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.553 [INFO][5261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.557 [INFO][5261] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.561 [INFO][5261] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.562 [INFO][5261] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.564 [INFO][5261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.564 [INFO][5261] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.566 [INFO][5261] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.586 [INFO][5261] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.592 [INFO][5261] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.593 [INFO][5261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" host="localhost" Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.593 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 8 10:10:07.614256 containerd[1595]: 2025-07-08 10:10:07.593 [INFO][5261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" HandleID="k8s-pod-network.a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Workload="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.595 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0", GenerateName:"calico-kube-controllers-6b995ff85b-", Namespace:"calico-system", SelfLink:"", UID:"2f37cc6d-720d-4b49-b845-4d7d2188107a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b995ff85b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b995ff85b-5jvv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali854fd9de31a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.595 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.595 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali854fd9de31a ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.598 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 
10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.598 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0", GenerateName:"calico-kube-controllers-6b995ff85b-", Namespace:"calico-system", SelfLink:"", UID:"2f37cc6d-720d-4b49-b845-4d7d2188107a", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.July, 8, 10, 9, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b995ff85b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f", Pod:"calico-kube-controllers-6b995ff85b-5jvv6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali854fd9de31a", MAC:"86:4a:c5:c1:49:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 8 
10:10:07.614865 containerd[1595]: 2025-07-08 10:10:07.606 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" Namespace="calico-system" Pod="calico-kube-controllers-6b995ff85b-5jvv6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b995ff85b--5jvv6-eth0" Jul 8 10:10:07.647717 containerd[1595]: time="2025-07-08T10:10:07.646969232Z" level=info msg="connecting to shim a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f" address="unix:///run/containerd/s/0db347fafc35cc904ba472f9415f1015b37a4f74135b1ffd9cc0a3159e7b728e" namespace=k8s.io protocol=ttrpc version=3 Jul 8 10:10:07.671266 systemd[1]: Started cri-containerd-a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f.scope - libcontainer container a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f. Jul 8 10:10:07.687476 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 8 10:10:07.727065 containerd[1595]: time="2025-07-08T10:10:07.726984719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b995ff85b-5jvv6,Uid:2f37cc6d-720d-4b49-b845-4d7d2188107a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f\"" Jul 8 10:10:08.078102 containerd[1595]: time="2025-07-08T10:10:08.078024067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:08.078821 containerd[1595]: time="2025-07-08T10:10:08.078746854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 8 10:10:08.079952 containerd[1595]: time="2025-07-08T10:10:08.079910757Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:08.081892 containerd[1595]: time="2025-07-08T10:10:08.081849835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:08.082390 containerd[1595]: time="2025-07-08T10:10:08.082357657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.255112306s" Jul 8 10:10:08.082390 containerd[1595]: time="2025-07-08T10:10:08.082385900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 8 10:10:08.083737 containerd[1595]: time="2025-07-08T10:10:08.083482687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 8 10:10:08.084905 containerd[1595]: time="2025-07-08T10:10:08.084847637Z" level=info msg="CreateContainer within sandbox \"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 8 10:10:08.097655 containerd[1595]: time="2025-07-08T10:10:08.096758962Z" level=info msg="Container 0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:08.106565 containerd[1595]: time="2025-07-08T10:10:08.106514282Z" level=info msg="CreateContainer within sandbox \"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8\"" Jul 8 
10:10:08.107027 containerd[1595]: time="2025-07-08T10:10:08.106997550Z" level=info msg="StartContainer for \"0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8\"" Jul 8 10:10:08.108669 containerd[1595]: time="2025-07-08T10:10:08.108643196Z" level=info msg="connecting to shim 0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8" address="unix:///run/containerd/s/09a85c1b6fac19af5b3d8e5e110ede81026a647c6062e033d40216cb670f7263" protocol=ttrpc version=3 Jul 8 10:10:08.139243 systemd[1]: Started cri-containerd-0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8.scope - libcontainer container 0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8. Jul 8 10:10:08.183203 containerd[1595]: time="2025-07-08T10:10:08.183159974Z" level=info msg="StartContainer for \"0fb6c531476d63abd80ce4cb28b668a767d1ae3256acc86fc56998b50199a0f8\" returns successfully" Jul 8 10:10:08.328370 systemd-networkd[1494]: cali6427aa4f6be: Gained IPv6LL Jul 8 10:10:09.544362 systemd-networkd[1494]: cali854fd9de31a: Gained IPv6LL Jul 8 10:10:11.179481 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Jul 8 10:10:11.246370 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:11.248438 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:11.253414 systemd-logind[1579]: New session 14 of user core. Jul 8 10:10:11.262222 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 8 10:10:11.405161 sshd[5366]: Connection closed by 10.0.0.1 port 46096 Jul 8 10:10:11.407273 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:11.410839 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit. Jul 8 10:10:11.411248 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:46096.service: Deactivated successfully. 
Jul 8 10:10:11.414022 systemd[1]: session-14.scope: Deactivated successfully. Jul 8 10:10:11.418123 systemd-logind[1579]: Removed session 14. Jul 8 10:10:11.460975 containerd[1595]: time="2025-07-08T10:10:11.460603361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" id:\"1ff206bebac229699886b528b2e9ecb32764f4261d0b3673908cb18284bcecb1\" pid:5389 exit_status:1 exited_at:{seconds:1751969411 nanos:460254646}" Jul 8 10:10:13.021405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078615499.mount: Deactivated successfully. Jul 8 10:10:13.104219 containerd[1595]: time="2025-07-08T10:10:13.104157761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:13.105007 containerd[1595]: time="2025-07-08T10:10:13.104941376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 8 10:10:13.106133 containerd[1595]: time="2025-07-08T10:10:13.106043145Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:13.108223 containerd[1595]: time="2025-07-08T10:10:13.108191216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 8 10:10:13.108694 containerd[1595]: time="2025-07-08T10:10:13.108664560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.025141648s" Jul 8 10:10:13.108694 containerd[1595]: time="2025-07-08T10:10:13.108693196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 8 10:10:13.109727 containerd[1595]: time="2025-07-08T10:10:13.109694180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 8 10:10:13.113868 containerd[1595]: time="2025-07-08T10:10:13.113841854Z" level=info msg="CreateContainer within sandbox \"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 8 10:10:13.121465 containerd[1595]: time="2025-07-08T10:10:13.121434977Z" level=info msg="Container 7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5: CDI devices from CRI Config.CDIDevices: []" Jul 8 10:10:13.131136 containerd[1595]: time="2025-07-08T10:10:13.130791336Z" level=info msg="CreateContainer within sandbox \"0eb9b22d7f0b798e03c2e587410e6997a7349a7f4976dde9eae7ac9a8aee8210\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5\"" Jul 8 10:10:13.131914 containerd[1595]: time="2025-07-08T10:10:13.131888647Z" level=info msg="StartContainer for \"7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5\"" Jul 8 10:10:13.133187 containerd[1595]: time="2025-07-08T10:10:13.133138172Z" level=info msg="connecting to shim 7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5" address="unix:///run/containerd/s/1bd4010bf66f09d173131ca28db668959aeb04f54fcc5350cb4162115afd0d06" protocol=ttrpc version=3 Jul 8 10:10:13.159225 systemd[1]: Started cri-containerd-7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5.scope - 
libcontainer container 7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5. Jul 8 10:10:13.207153 containerd[1595]: time="2025-07-08T10:10:13.207071265Z" level=info msg="StartContainer for \"7a4a6e6144db71809fb2aee9111ddff9d82665283f918cc7b322755f90e826f5\" returns successfully" Jul 8 10:10:16.427149 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:46106.service - OpenSSH per-connection server daemon (10.0.0.1:46106). Jul 8 10:10:16.533289 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 46106 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4 Jul 8 10:10:16.535562 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 8 10:10:16.541458 systemd-logind[1579]: New session 15 of user core. Jul 8 10:10:16.552248 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 8 10:10:16.681720 sshd[5470]: Connection closed by 10.0.0.1 port 46106 Jul 8 10:10:16.682328 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Jul 8 10:10:16.686889 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:46106.service: Deactivated successfully. Jul 8 10:10:16.689158 systemd[1]: session-15.scope: Deactivated successfully. Jul 8 10:10:16.692007 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit. Jul 8 10:10:16.693227 systemd-logind[1579]: Removed session 15. 
Jul 8 10:10:18.407055 containerd[1595]: time="2025-07-08T10:10:18.406874749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\" id:\"46e7b181b9d48a3f9a5e258eeb4a4b227ab7c0cdaa5e04da5b12430891e75118\" pid:5501 exited_at:{seconds:1751969418 nanos:405972612}"
Jul 8 10:10:18.574788 kubelet[2732]: I0708 10:10:18.574665 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6768bdd548-mmrtt" podStartSLOduration=6.509452782 podStartE2EDuration="25.574638107s" podCreationTimestamp="2025-07-08 10:09:53 +0000 UTC" firstStartedPulling="2025-07-08 10:09:54.044362883 +0000 UTC m=+50.856185996" lastFinishedPulling="2025-07-08 10:10:13.109548218 +0000 UTC m=+69.921371321" observedRunningTime="2025-07-08 10:10:13.387837165 +0000 UTC m=+70.199660278" watchObservedRunningTime="2025-07-08 10:10:18.574638107 +0000 UTC m=+75.386461230"
Jul 8 10:10:18.693302 containerd[1595]: time="2025-07-08T10:10:18.693128971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:18.698484 containerd[1595]: time="2025-07-08T10:10:18.698441137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 8 10:10:18.701548 containerd[1595]: time="2025-07-08T10:10:18.701488563Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:18.704493 containerd[1595]: time="2025-07-08T10:10:18.704450115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:18.705207 containerd[1595]: time="2025-07-08T10:10:18.705105776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 5.595357132s"
Jul 8 10:10:18.705207 containerd[1595]: time="2025-07-08T10:10:18.705142186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 8 10:10:18.706490 containerd[1595]: time="2025-07-08T10:10:18.706334311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 8 10:10:18.723415 containerd[1595]: time="2025-07-08T10:10:18.723350366Z" level=info msg="CreateContainer within sandbox \"a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 8 10:10:18.736717 containerd[1595]: time="2025-07-08T10:10:18.736660436Z" level=info msg="Container c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683: CDI devices from CRI Config.CDIDevices: []"
Jul 8 10:10:18.745266 containerd[1595]: time="2025-07-08T10:10:18.745215975Z" level=info msg="CreateContainer within sandbox \"a4aa3ccde6c54805c820f2ede190fd2998ffd3fc71546ec279931b02ca19d69f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\""
Jul 8 10:10:18.747100 containerd[1595]: time="2025-07-08T10:10:18.745742508Z" level=info msg="StartContainer for \"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\""
Jul 8 10:10:18.747100 containerd[1595]: time="2025-07-08T10:10:18.746943099Z" level=info msg="connecting to shim c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683" address="unix:///run/containerd/s/0db347fafc35cc904ba472f9415f1015b37a4f74135b1ffd9cc0a3159e7b728e" protocol=ttrpc version=3
Jul 8 10:10:18.777368 systemd[1]: Started cri-containerd-c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683.scope - libcontainer container c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683.
Jul 8 10:10:18.827704 containerd[1595]: time="2025-07-08T10:10:18.827658398Z" level=info msg="StartContainer for \"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\" returns successfully"
Jul 8 10:10:19.401562 kubelet[2732]: I0708 10:10:19.401485 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b995ff85b-5jvv6" podStartSLOduration=40.426241526 podStartE2EDuration="51.401457341s" podCreationTimestamp="2025-07-08 10:09:28 +0000 UTC" firstStartedPulling="2025-07-08 10:10:07.730888403 +0000 UTC m=+64.542711516" lastFinishedPulling="2025-07-08 10:10:18.706104218 +0000 UTC m=+75.517927331" observedRunningTime="2025-07-08 10:10:19.400779406 +0000 UTC m=+76.212602520" watchObservedRunningTime="2025-07-08 10:10:19.401457341 +0000 UTC m=+76.213280464"
Jul 8 10:10:19.460673 containerd[1595]: time="2025-07-08T10:10:19.460607178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\" id:\"64cd50ab720fb522f8fe03064357361938c65f214d88f743ff64b808900d5cd9\" pid:5571 exited_at:{seconds:1751969419 nanos:460143998}"
Jul 8 10:10:21.697830 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:43360.service - OpenSSH per-connection server daemon (10.0.0.1:43360).
Jul 8 10:10:21.793397 sshd[5582]: Accepted publickey for core from 10.0.0.1 port 43360 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:21.794934 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:21.801237 systemd-logind[1579]: New session 16 of user core.
Jul 8 10:10:21.812198 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 8 10:10:21.945937 sshd[5585]: Connection closed by 10.0.0.1 port 43360
Jul 8 10:10:21.946286 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:21.951982 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:43360.service: Deactivated successfully.
Jul 8 10:10:21.953870 systemd[1]: session-16.scope: Deactivated successfully.
Jul 8 10:10:21.954587 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit.
Jul 8 10:10:21.955678 systemd-logind[1579]: Removed session 16.
Jul 8 10:10:22.945213 containerd[1595]: time="2025-07-08T10:10:22.945164627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" id:\"073ff89c404cd5c7c5c866e0ab0d37243bb9b92cd26a97b5af88df5e200d765e\" pid:5610 exited_at:{seconds:1751969422 nanos:944712068}"
Jul 8 10:10:23.159587 containerd[1595]: time="2025-07-08T10:10:23.159525220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:23.160403 containerd[1595]: time="2025-07-08T10:10:23.160366022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 8 10:10:23.161640 containerd[1595]: time="2025-07-08T10:10:23.161586144Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:23.163491 containerd[1595]: time="2025-07-08T10:10:23.163457423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 8 10:10:23.164100 containerd[1595]: time="2025-07-08T10:10:23.164024532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.457660544s"
Jul 8 10:10:23.164100 containerd[1595]: time="2025-07-08T10:10:23.164069157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 8 10:10:23.166023 containerd[1595]: time="2025-07-08T10:10:23.165992257Z" level=info msg="CreateContainer within sandbox \"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 8 10:10:23.175025 containerd[1595]: time="2025-07-08T10:10:23.174989379Z" level=info msg="Container 7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457: CDI devices from CRI Config.CDIDevices: []"
Jul 8 10:10:23.184826 containerd[1595]: time="2025-07-08T10:10:23.184799180Z" level=info msg="CreateContainer within sandbox \"960add990c6698ad1935571babf1ce15f513d107d2eb6a0599f92ecc4f924373\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457\""
Jul 8 10:10:23.185352 containerd[1595]: time="2025-07-08T10:10:23.185301594Z" level=info msg="StartContainer for \"7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457\""
Jul 8 10:10:23.186652 containerd[1595]: time="2025-07-08T10:10:23.186622637Z" level=info msg="connecting to shim 7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457" address="unix:///run/containerd/s/09a85c1b6fac19af5b3d8e5e110ede81026a647c6062e033d40216cb670f7263" protocol=ttrpc version=3
Jul 8 10:10:23.218268 systemd[1]: Started cri-containerd-7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457.scope - libcontainer container 7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457.
Jul 8 10:10:23.349294 containerd[1595]: time="2025-07-08T10:10:23.349243869Z" level=info msg="StartContainer for \"7241f5add0378ee0d3b9f22422a1a2040519a3cdd168a57ac3e942acedeb7457\" returns successfully"
Jul 8 10:10:23.362386 kubelet[2732]: I0708 10:10:23.362331 2732 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 8 10:10:23.362982 kubelet[2732]: I0708 10:10:23.362924 2732 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 8 10:10:26.957652 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:43374.service - OpenSSH per-connection server daemon (10.0.0.1:43374).
Jul 8 10:10:27.021905 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 43374 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:27.023343 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:27.027603 systemd-logind[1579]: New session 17 of user core.
Jul 8 10:10:27.035208 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 8 10:10:27.262688 sshd[5664]: Connection closed by 10.0.0.1 port 43374
Jul 8 10:10:27.263141 sshd-session[5660]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:27.272553 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:43374.service: Deactivated successfully.
Jul 8 10:10:27.274364 systemd[1]: session-17.scope: Deactivated successfully.
Jul 8 10:10:27.275186 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit.
Jul 8 10:10:27.277867 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:43386.service - OpenSSH per-connection server daemon (10.0.0.1:43386).
Jul 8 10:10:27.278989 systemd-logind[1579]: Removed session 17.
Jul 8 10:10:27.328509 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 43386 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:27.329845 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:27.334381 systemd-logind[1579]: New session 18 of user core.
Jul 8 10:10:27.341233 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 8 10:10:28.479268 sshd[5680]: Connection closed by 10.0.0.1 port 43386
Jul 8 10:10:28.479942 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:28.491984 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:43386.service: Deactivated successfully.
Jul 8 10:10:28.493985 systemd[1]: session-18.scope: Deactivated successfully.
Jul 8 10:10:28.496439 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit.
Jul 8 10:10:28.499637 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:51602.service - OpenSSH per-connection server daemon (10.0.0.1:51602).
Jul 8 10:10:28.500520 systemd-logind[1579]: Removed session 18.
Jul 8 10:10:28.573817 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 51602 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:28.575593 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:28.581191 systemd-logind[1579]: New session 19 of user core.
Jul 8 10:10:28.589259 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 8 10:10:30.700996 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:30.701200 sshd[5697]: Connection closed by 10.0.0.1 port 51602
Jul 8 10:10:30.715255 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:51602.service: Deactivated successfully.
Jul 8 10:10:30.717581 systemd[1]: session-19.scope: Deactivated successfully.
Jul 8 10:10:30.717873 systemd[1]: session-19.scope: Consumed 639ms CPU time, 72.4M memory peak.
Jul 8 10:10:30.718411 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit.
Jul 8 10:10:30.723194 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:51614.service - OpenSSH per-connection server daemon (10.0.0.1:51614).
Jul 8 10:10:30.724694 systemd-logind[1579]: Removed session 19.
Jul 8 10:10:30.778193 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 51614 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:30.779927 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:30.785211 systemd-logind[1579]: New session 20 of user core.
Jul 8 10:10:30.791615 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 8 10:10:31.239302 sshd[5720]: Connection closed by 10.0.0.1 port 51614
Jul 8 10:10:31.240319 sshd-session[5717]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:31.253299 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:51614.service: Deactivated successfully.
Jul 8 10:10:31.256959 systemd[1]: session-20.scope: Deactivated successfully.
Jul 8 10:10:31.258469 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit.
Jul 8 10:10:31.260940 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:51624.service - OpenSSH per-connection server daemon (10.0.0.1:51624).
Jul 8 10:10:31.261904 systemd-logind[1579]: Removed session 20.
Jul 8 10:10:31.321717 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 51624 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:31.323256 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:31.328136 systemd-logind[1579]: New session 21 of user core.
Jul 8 10:10:31.342243 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 8 10:10:31.513845 sshd[5735]: Connection closed by 10.0.0.1 port 51624
Jul 8 10:10:31.514463 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:31.519548 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:51624.service: Deactivated successfully.
Jul 8 10:10:31.521812 systemd[1]: session-21.scope: Deactivated successfully.
Jul 8 10:10:31.522673 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit.
Jul 8 10:10:31.524158 systemd-logind[1579]: Removed session 21.
Jul 8 10:10:36.528151 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:51632.service - OpenSSH per-connection server daemon (10.0.0.1:51632).
Jul 8 10:10:36.588836 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 51632 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:36.590434 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:36.594876 systemd-logind[1579]: New session 22 of user core.
Jul 8 10:10:36.602255 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 8 10:10:36.711116 sshd[5760]: Connection closed by 10.0.0.1 port 51632
Jul 8 10:10:36.711899 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:36.717230 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:51632.service: Deactivated successfully.
Jul 8 10:10:36.719417 systemd[1]: session-22.scope: Deactivated successfully.
Jul 8 10:10:36.722451 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit.
Jul 8 10:10:36.723726 systemd-logind[1579]: Removed session 22.
Jul 8 10:10:41.470149 containerd[1595]: time="2025-07-08T10:10:41.470066655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\" id:\"d8f8457c37eb4db0609073f626a57b462550b61df20e8d29ffea3b8e27bc7867\" pid:5806 exited_at:{seconds:1751969441 nanos:469549902}"
Jul 8 10:10:41.523257 containerd[1595]: time="2025-07-08T10:10:41.523194163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b24b7174551df6837c9c33840688bc700a95901dc7ff716b585b2f5002cae366\" id:\"9b34b4c88c7d99707df003a32326fa2651e8c8d40f42aaee98fc9682c3b49827\" pid:5792 exited_at:{seconds:1751969441 nanos:522700633}"
Jul 8 10:10:41.631465 kubelet[2732]: I0708 10:10:41.630805 2732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6fn56" podStartSLOduration=47.940130951 podStartE2EDuration="1m14.630780069s" podCreationTimestamp="2025-07-08 10:09:27 +0000 UTC" firstStartedPulling="2025-07-08 10:09:56.4741709 +0000 UTC m=+53.285994013" lastFinishedPulling="2025-07-08 10:10:23.164820018 +0000 UTC m=+79.976643131" observedRunningTime="2025-07-08 10:10:23.409153109 +0000 UTC m=+80.220976222" watchObservedRunningTime="2025-07-08 10:10:41.630780069 +0000 UTC m=+98.442603182"
Jul 8 10:10:41.734292 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:34032.service - OpenSSH per-connection server daemon (10.0.0.1:34032).
Jul 8 10:10:41.797868 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 34032 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:41.799870 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:41.804887 systemd-logind[1579]: New session 23 of user core.
Jul 8 10:10:41.815250 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 8 10:10:41.941545 sshd[5827]: Connection closed by 10.0.0.1 port 34032
Jul 8 10:10:41.941895 sshd-session[5824]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:41.945352 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:34032.service: Deactivated successfully.
Jul 8 10:10:41.947403 systemd[1]: session-23.scope: Deactivated successfully.
Jul 8 10:10:41.948881 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit.
Jul 8 10:10:41.950469 systemd-logind[1579]: Removed session 23.
Jul 8 10:10:46.954848 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:34040.service - OpenSSH per-connection server daemon (10.0.0.1:34040).
Jul 8 10:10:47.006937 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 34040 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:47.008911 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:47.014272 systemd-logind[1579]: New session 24 of user core.
Jul 8 10:10:47.022300 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 8 10:10:47.266677 sshd[5846]: Connection closed by 10.0.0.1 port 34040
Jul 8 10:10:47.267065 sshd-session[5843]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:47.271269 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:34040.service: Deactivated successfully.
Jul 8 10:10:47.274516 systemd[1]: session-24.scope: Deactivated successfully.
Jul 8 10:10:47.275750 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit.
Jul 8 10:10:47.278858 systemd-logind[1579]: Removed session 24.
Jul 8 10:10:47.848213 containerd[1595]: time="2025-07-08T10:10:47.848145546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"556c574613f9876731555a16fcda7a6fc473ec9868aeaede9f6e9c9e3adf7288\" id:\"d2da7d1d819f9a6d12948f05bebe9c16b8528359e2456d902ac45412542812d0\" pid:5873 exited_at:{seconds:1751969447 nanos:847219177}"
Jul 8 10:10:50.011179 containerd[1595]: time="2025-07-08T10:10:50.011127949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c79ed4e096ca910bdfbf438d8ef6eabd827ecdcefa62fd68d5cac0014fcb8683\" id:\"9498ed7898f50f75b9a3a3292eb6963a868285108a5db634fe629d37c522cbe4\" pid:5897 exited_at:{seconds:1751969450 nanos:10909755}"
Jul 8 10:10:52.280734 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:46460.service - OpenSSH per-connection server daemon (10.0.0.1:46460).
Jul 8 10:10:52.330322 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 46460 ssh2: RSA SHA256:BQWGw9qDwwKm1WzdalDD28JNIdz3HBcaxKwPB41deh4
Jul 8 10:10:52.332508 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 8 10:10:52.337243 systemd-logind[1579]: New session 25 of user core.
Jul 8 10:10:52.344251 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 8 10:10:52.499532 sshd[5911]: Connection closed by 10.0.0.1 port 46460
Jul 8 10:10:52.500238 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
Jul 8 10:10:52.504985 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit.
Jul 8 10:10:52.505383 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:46460.service: Deactivated successfully.
Jul 8 10:10:52.507907 systemd[1]: session-25.scope: Deactivated successfully.
Jul 8 10:10:52.511568 systemd-logind[1579]: Removed session 25.