Jul 10 00:12:57.524293 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025 Jul 10 00:12:57.524329 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:12:57.524346 kernel: BIOS-provided physical RAM map: Jul 10 00:12:57.524365 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 00:12:57.524374 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 00:12:57.524384 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 00:12:57.524395 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 00:12:57.524406 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 00:12:57.524424 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 00:12:57.524435 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 00:12:57.524445 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 10 00:12:57.524454 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 00:12:57.524465 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 00:12:57.524475 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 00:12:57.524491 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 00:12:57.524502 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 00:12:57.524517 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 00:12:57.524528 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 00:12:57.524538 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 00:12:57.524548 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 00:12:57.524559 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 00:12:57.524569 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 00:12:57.524579 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 00:12:57.524589 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:12:57.524600 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 00:12:57.524614 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:12:57.524626 kernel: NX (Execute Disable) protection: active Jul 10 00:12:57.524637 kernel: APIC: Static calls initialized Jul 10 00:12:57.524649 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 10 00:12:57.524659 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 10 00:12:57.524669 kernel: extended physical RAM map: Jul 10 00:12:57.524680 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 10 00:12:57.524690 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 10 00:12:57.524701 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 10 00:12:57.524711 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 10 00:12:57.524722 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 10 00:12:57.524737 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 10 00:12:57.524747 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 10 00:12:57.524758 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 10 00:12:57.524768 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 10 00:12:57.524785 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 10 00:12:57.524796 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 10 00:12:57.524810 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 10 00:12:57.524822 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 10 00:12:57.524833 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 10 00:12:57.524844 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 10 00:12:57.524855 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 10 00:12:57.524867 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 10 00:12:57.524877 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 10 00:12:57.524887 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 10 00:12:57.524897 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 10 00:12:57.524912 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 10 00:12:57.524940 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 10 00:12:57.524952 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 10 00:12:57.524963 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 00:12:57.524974 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:12:57.524985 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 10 00:12:57.524996 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:12:57.525012 kernel: efi: EFI v2.7 by EDK II Jul 10 00:12:57.525023 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 10 00:12:57.525034 kernel: random: crng init done Jul 10 00:12:57.525049 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 10 00:12:57.525060 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 10 00:12:57.525079 kernel: secureboot: Secure boot disabled Jul 10 00:12:57.525090 kernel: SMBIOS 2.8 present. 
Jul 10 00:12:57.525101 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 10 00:12:57.525112 kernel: DMI: Memory slots populated: 1/1 Jul 10 00:12:57.525123 kernel: Hypervisor detected: KVM Jul 10 00:12:57.525134 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 00:12:57.525145 kernel: kvm-clock: using sched offset of 9284066258 cycles Jul 10 00:12:57.525158 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 00:12:57.525169 kernel: tsc: Detected 2794.750 MHz processor Jul 10 00:12:57.525181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:12:57.525196 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:12:57.525207 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 10 00:12:57.525219 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 00:12:57.525231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:12:57.525242 kernel: Using GB pages for direct mapping Jul 10 00:12:57.525253 kernel: ACPI: Early table checksum verification disabled Jul 10 00:12:57.525265 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 10 00:12:57.525276 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:12:57.525288 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.525303 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.525314 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 10 00:12:57.525325 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.525337 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.525348 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.528434 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:12:57.528450 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 10 00:12:57.528462 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 10 00:12:57.528474 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 10 00:12:57.528492 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 10 00:12:57.528503 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 10 00:12:57.528515 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 10 00:12:57.528526 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 10 00:12:57.528537 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 10 00:12:57.528549 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 10 00:12:57.528560 kernel: No NUMA configuration found Jul 10 00:12:57.528571 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 10 00:12:57.528582 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 10 00:12:57.528599 kernel: Zone ranges: Jul 10 00:12:57.528611 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:12:57.528623 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 10 00:12:57.528634 kernel: Normal empty Jul 10 00:12:57.528645 kernel: Device empty Jul 10 00:12:57.528655 kernel: Movable zone start for each node Jul 10 00:12:57.528666 
kernel: Early memory node ranges Jul 10 00:12:57.528678 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 10 00:12:57.528689 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 10 00:12:57.528707 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 10 00:12:57.528723 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 10 00:12:57.528734 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 10 00:12:57.528746 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 10 00:12:57.528757 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 10 00:12:57.528769 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 10 00:12:57.528780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 10 00:12:57.528796 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:12:57.528808 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 10 00:12:57.528835 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 10 00:12:57.528847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:12:57.528858 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 10 00:12:57.528870 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 10 00:12:57.528885 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 10 00:12:57.528896 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 10 00:12:57.528907 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 10 00:12:57.528937 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 00:12:57.528951 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 00:12:57.528968 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:12:57.528980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 00:12:57.528992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 00:12:57.529004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:12:57.529016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 00:12:57.529027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 00:12:57.529039 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:12:57.529051 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 00:12:57.529067 kernel: TSC deadline timer available Jul 10 00:12:57.529079 kernel: CPU topo: Max. logical packages: 1 Jul 10 00:12:57.529090 kernel: CPU topo: Max. logical dies: 1 Jul 10 00:12:57.529102 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:12:57.529114 kernel: CPU topo: Max. threads per core: 1 Jul 10 00:12:57.529126 kernel: CPU topo: Num. cores per package: 4 Jul 10 00:12:57.529138 kernel: CPU topo: Num. 
threads per package: 4 Jul 10 00:12:57.529149 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 10 00:12:57.529161 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 00:12:57.529173 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 00:12:57.529189 kernel: kvm-guest: setup PV sched yield Jul 10 00:12:57.530414 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 10 00:12:57.530428 kernel: Booting paravirtualized kernel on KVM Jul 10 00:12:57.530440 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:12:57.530453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 10 00:12:57.530465 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 10 00:12:57.530476 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 10 00:12:57.530489 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 00:12:57.530501 kernel: kvm-guest: PV spinlocks enabled Jul 10 00:12:57.530519 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 00:12:57.530533 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:12:57.530551 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:12:57.530563 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:12:57.530576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:12:57.530588 kernel: Fallback order for Node 0: 0 Jul 10 00:12:57.530600 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 10 00:12:57.530612 kernel: Policy zone: DMA32 Jul 10 00:12:57.530629 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:12:57.530641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:12:57.530653 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:12:57.530665 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:12:57.530677 kernel: Dynamic Preempt: voluntary Jul 10 00:12:57.530689 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:12:57.530702 kernel: rcu: RCU event tracing is enabled. Jul 10 00:12:57.530715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:12:57.530727 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:12:57.530744 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:12:57.530756 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:12:57.530768 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:12:57.530785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:12:57.530797 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:12:57.530809 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:12:57.530821 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 10 00:12:57.530832 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 00:12:57.530844 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 00:12:57.530860 kernel: Console: colour dummy device 80x25 Jul 10 00:12:57.530872 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:12:57.530883 kernel: ACPI: Core revision 20240827 Jul 10 00:12:57.530894 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 00:12:57.530905 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:12:57.530921 kernel: x2apic enabled Jul 10 00:12:57.530948 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:12:57.530961 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 10 00:12:57.530973 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 10 00:12:57.530989 kernel: kvm-guest: setup PV IPIs Jul 10 00:12:57.531000 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:12:57.531012 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 10 00:12:57.531024 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Jul 10 00:12:57.531037 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 00:12:57.531049 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 00:12:57.531061 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 00:12:57.531072 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:12:57.532409 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:12:57.532431 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:12:57.532443 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 00:12:57.532455 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 00:12:57.532468 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:12:57.532485 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 00:12:57.532497 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 10 00:12:57.532511 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 10 00:12:57.532523 kernel: x86/bugs: return thunk changed Jul 10 00:12:57.532539 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 10 00:12:57.532552 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:12:57.532564 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:12:57.532576 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:12:57.532589 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:12:57.532601 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 00:12:57.532613 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:12:57.532625 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:12:57.532637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:12:57.532653 kernel: landlock: Up and running. Jul 10 00:12:57.532665 kernel: SELinux: Initializing. 
Jul 10 00:12:57.532677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:12:57.532690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:12:57.532703 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 00:12:57.532715 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 00:12:57.532727 kernel: ... version: 0 Jul 10 00:12:57.532739 kernel: ... bit width: 48 Jul 10 00:12:57.532750 kernel: ... generic registers: 6 Jul 10 00:12:57.532767 kernel: ... value mask: 0000ffffffffffff Jul 10 00:12:57.532779 kernel: ... max period: 00007fffffffffff Jul 10 00:12:57.532791 kernel: ... fixed-purpose events: 0 Jul 10 00:12:57.532803 kernel: ... event mask: 000000000000003f Jul 10 00:12:57.532815 kernel: signal: max sigframe size: 1776 Jul 10 00:12:57.532827 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:12:57.532839 kernel: rcu: Max phase no-delay instances is 400. Jul 10 00:12:57.532856 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 00:12:57.532869 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:12:57.532885 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:12:57.532896 kernel: .... node #0, CPUs: #1 #2 #3 Jul 10 00:12:57.532906 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:12:57.532920 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 10 00:12:57.532949 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137196K reserved, 0K cma-reserved) Jul 10 00:12:57.532961 kernel: devtmpfs: initialized Jul 10 00:12:57.532973 kernel: x86/mm: Memory block size: 128MB Jul 10 00:12:57.532984 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 10 00:12:57.532996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 10 00:12:57.533013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 10 00:12:57.533025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 10 00:12:57.533037 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 10 00:12:57.533050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 10 00:12:57.533062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:12:57.533075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:12:57.533086 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:12:57.533098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:12:57.533110 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:12:57.533126 kernel: audit: type=2000 audit(1752106371.039:1): state=initialized audit_enabled=0 res=1 Jul 10 00:12:57.533139 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:12:57.533150 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:12:57.533162 kernel: cpuidle: using governor menu Jul 10 00:12:57.533174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:12:57.533186 kernel: dca service started, version 1.12.1 Jul 10 00:12:57.533198 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 10 00:12:57.533210 kernel: PCI: Using 
configuration type 1 for base access Jul 10 00:12:57.533221 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 10 00:12:57.533235 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:12:57.533246 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 00:12:57.533256 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:12:57.533265 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:12:57.533275 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:12:57.533285 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:12:57.533295 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:12:57.533305 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:12:57.533315 kernel: ACPI: Interpreter enabled Jul 10 00:12:57.533328 kernel: ACPI: PM: (supports S0 S3 S5) Jul 10 00:12:57.533338 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:12:57.533348 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:12:57.533369 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 00:12:57.533379 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 00:12:57.533389 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:12:57.534701 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:12:57.534859 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 00:12:57.535039 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 00:12:57.535056 kernel: PCI host bridge to bus 0000:00 Jul 10 00:12:57.535225 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:12:57.535351 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 00:12:57.536577 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:12:57.536720 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 10 00:12:57.536864 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 10 00:12:57.537038 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:12:57.537183 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:12:57.537440 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 10 00:12:57.537639 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 10 00:12:57.537797 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 10 00:12:57.537992 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 10 00:12:57.538157 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 10 00:12:57.538324 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:12:57.538530 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 00:12:57.538700 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 10 00:12:57.538869 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 10 00:12:57.539069 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 10 00:12:57.539252 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:12:57.542512 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 
10 00:12:57.542685 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 10 00:12:57.542843 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 10 00:12:57.543050 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 00:12:57.543208 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 10 00:12:57.543373 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 10 00:12:57.543534 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 10 00:12:57.543703 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 10 00:12:57.543885 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 10 00:12:57.544064 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 00:12:57.544263 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 10 00:12:57.544439 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 10 00:12:57.544601 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 10 00:12:57.544787 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 10 00:12:57.544965 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 10 00:12:57.544982 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 00:12:57.544994 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 00:12:57.545005 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:12:57.545016 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 00:12:57.545027 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 00:12:57.545037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 00:12:57.545053 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 00:12:57.545064 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 00:12:57.545075 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 00:12:57.545086 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 00:12:57.545098 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 00:12:57.545109 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 00:12:57.545120 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 00:12:57.545131 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 00:12:57.545143 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 00:12:57.545157 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 00:12:57.545168 kernel: iommu: Default domain type: Translated Jul 10 00:12:57.545179 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:12:57.545190 kernel: efivars: Registered efivars operations Jul 10 00:12:57.545202 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:12:57.545213 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:12:57.545224 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 10 00:12:57.545235 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 10 00:12:57.545246 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 10 00:12:57.545260 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 10 00:12:57.545271 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 10 00:12:57.545282 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 10 00:12:57.545293 kernel: e820: reserve 
RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 10 00:12:57.545305 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 10 00:12:57.548577 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 00:12:57.548754 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 00:12:57.548942 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:12:57.548967 kernel: vgaarb: loaded Jul 10 00:12:57.548979 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 00:12:57.548991 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 00:12:57.549003 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 00:12:57.549015 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:12:57.549027 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:12:57.549039 kernel: pnp: PnP ACPI init Jul 10 00:12:57.549283 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 10 00:12:57.549323 kernel: pnp: PnP ACPI: found 6 devices Jul 10 00:12:57.549339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:12:57.549360 kernel: NET: Registered PF_INET protocol family Jul 10 00:12:57.549373 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:12:57.549385 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:12:57.549397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:12:57.549410 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:12:57.549422 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 00:12:57.549434 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:12:57.549449 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:12:57.549461 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:12:57.549473 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:12:57.549485 kernel: NET: Registered PF_XDP protocol family Jul 10 00:12:57.549655 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 10 00:12:57.549818 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 10 00:12:57.549996 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 00:12:57.550141 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 00:12:57.550285 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 00:12:57.550430 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 10 00:12:57.550563 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 10 00:12:57.550700 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:12:57.550715 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:12:57.550727 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 10 00:12:57.550739 kernel: Initialise system trusted keyrings Jul 10 00:12:57.550751 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:12:57.550763 kernel: Key type asymmetric registered Jul 10 00:12:57.550778 kernel: Asymmetric key parser 'x509' registered Jul 10 00:12:57.550790 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 
00:12:57.550802 kernel: io scheduler mq-deadline registered Jul 10 00:12:57.550816 kernel: io scheduler kyber registered Jul 10 00:12:57.550828 kernel: io scheduler bfq registered Jul 10 00:12:57.550842 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:12:57.550855 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 00:12:57.550867 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 00:12:57.550878 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 00:12:57.550889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:12:57.550901 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:12:57.550912 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 00:12:57.550938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:12:57.550951 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:12:57.551126 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 00:12:57.551298 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 00:12:57.554535 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:12:56 UTC (1752106376) Jul 10 00:12:57.554694 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 10 00:12:57.554711 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 10 00:12:57.554725 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jul 10 00:12:57.554737 kernel: efifb: probing for efifb Jul 10 00:12:57.554749 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 10 00:12:57.554768 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 10 00:12:57.554780 kernel: efifb: scrolling: redraw Jul 10 00:12:57.554793 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 00:12:57.554805 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 00:12:57.554816 kernel: fb0: EFI VGA frame buffer device Jul 10 00:12:57.554828 kernel: pstore: Using crash dump compression: deflate Jul 10 00:12:57.554841 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 00:12:57.554853 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:12:57.554864 kernel: Segment Routing with IPv6 Jul 10 00:12:57.554879 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:12:57.554890 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:12:57.554902 kernel: Key type dns_resolver registered Jul 10 00:12:57.554921 kernel: IPI shorthand broadcast: enabled Jul 10 00:12:57.554947 kernel: sched_clock: Marking stable (7704005313, 298015955)->(8205458263, -203436995) Jul 10 00:12:57.554959 kernel: registered taskstats version 1 Jul 10 00:12:57.554971 kernel: Loading compiled-in X.509 certificates Jul 10 00:12:57.554984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 00:12:57.554996 kernel: Demotion targets for Node 0: null Jul 10 00:12:57.555007 kernel: Key type .fscrypt registered Jul 10 00:12:57.555023 kernel: Key type fscrypt-provisioning registered Jul 10 00:12:57.555035 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:12:57.555047 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:12:57.555058 kernel: ima: No architecture policies found Jul 10 00:12:57.555071 kernel: clk: Disabling unused clocks Jul 10 00:12:57.555083 kernel: Warning: unable to open an initial console. 
Jul 10 00:12:57.555096 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:12:57.555107 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:12:57.555122 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:12:57.555134 kernel: Run /init as init process Jul 10 00:12:57.555146 kernel: with arguments: Jul 10 00:12:57.555158 kernel: /init Jul 10 00:12:57.555170 kernel: with environment: Jul 10 00:12:57.555182 kernel: HOME=/ Jul 10 00:12:57.555194 kernel: TERM=linux Jul 10 00:12:57.555205 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:12:57.555218 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:12:57.555238 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:12:57.555253 systemd[1]: Detected virtualization kvm. Jul 10 00:12:57.555265 systemd[1]: Detected architecture x86-64. Jul 10 00:12:57.555277 systemd[1]: Running in initrd. Jul 10 00:12:57.555289 systemd[1]: No hostname configured, using default hostname. Jul 10 00:12:57.555302 systemd[1]: Hostname set to . Jul 10 00:12:57.555315 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:12:57.555330 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:12:57.555343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:12:57.555366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:12:57.555380 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:12:57.555392 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:12:57.555405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:12:57.555419 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:12:57.555437 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:12:57.555451 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:12:57.555464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:12:57.555476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:12:57.555488 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:12:57.555501 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:12:57.555513 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:12:57.555526 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:12:57.555538 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:12:57.555554 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:12:57.555567 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:12:57.555579 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:12:57.555592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 10 00:12:57.555605 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:12:57.555617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:12:57.555630 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:12:57.555642 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:12:57.555658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:12:57.555670 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:12:57.555683 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:12:57.555696 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:12:57.555708 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:12:57.555720 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:12:57.555733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:57.555745 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:12:57.555761 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:12:57.555774 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:12:57.555786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:12:57.555847 systemd-journald[222]: Collecting audit messages is disabled. Jul 10 00:12:57.555883 systemd-journald[222]: Journal started Jul 10 00:12:57.555910 systemd-journald[222]: Runtime Journal (/run/log/journal/00881a4f81e54ff0a2471453bffdac86) is 6M, max 48.5M, 42.4M free. Jul 10 00:12:57.574282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:57.559652 systemd-modules-load[224]: Inserted module 'overlay' Jul 10 00:12:57.586992 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:12:57.600073 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:12:57.609419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:12:57.620556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:12:57.699121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:12:57.709037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:12:57.726083 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:12:57.739123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:12:57.746838 systemd-tmpfiles[251]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:12:57.767023 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:12:57.769793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 10 00:12:57.772698 kernel: Bridge firewalling registered Jul 10 00:12:57.770425 systemd-modules-load[224]: Inserted module 'br_netfilter' Jul 10 00:12:57.772516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:12:57.779367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:12:57.794079 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:12:57.837678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:12:57.850478 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:12:57.993564 systemd-resolved[290]: Positive Trust Anchors: Jul 10 00:12:57.993587 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:12:57.993625 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:12:57.998609 systemd-resolved[290]: Defaulting to hostname 'linux'. Jul 10 00:12:58.002614 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:12:58.017130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:12:58.094428 kernel: SCSI subsystem initialized Jul 10 00:12:58.114385 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:12:58.160212 kernel: iscsi: registered transport (tcp) Jul 10 00:12:58.221052 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:12:58.221150 kernel: QLogic iSCSI HBA Driver Jul 10 00:12:58.282226 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:12:58.342379 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:12:58.345615 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:12:58.503571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:12:58.514503 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:12:58.625972 kernel: raid6: avx2x4 gen() 17683 MB/s Jul 10 00:12:58.642969 kernel: raid6: avx2x2 gen() 16631 MB/s Jul 10 00:12:58.662736 kernel: raid6: avx2x1 gen() 12599 MB/s Jul 10 00:12:58.662854 kernel: raid6: using algorithm avx2x4 gen() 17683 MB/s Jul 10 00:12:58.685382 kernel: raid6: .... xor() 3804 MB/s, rmw enabled Jul 10 00:12:58.685469 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:12:58.773381 kernel: xor: automatically using best checksumming function avx Jul 10 00:12:59.211706 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:12:59.235708 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 10 00:12:59.248635 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:12:59.321429 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 10 00:12:59.335020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:12:59.354320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:12:59.413155 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Jul 10 00:12:59.502432 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:12:59.510432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:12:59.718396 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:12:59.733530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:12:59.893812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:12:59.894569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:12:59.938905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:59.947841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:12:59.951817 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:12:59.966994 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:13:00.002078 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 00:13:00.006335 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:13:00.012225 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:13:00.012313 kernel: GPT:9289727 != 19775487 Jul 10 00:13:00.012330 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:13:00.013672 kernel: GPT:9289727 != 19775487 Jul 10 00:13:00.013722 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:13:00.013738 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:13:00.055770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:13:00.121006 kernel: libata version 3.00 loaded. Jul 10 00:13:00.221093 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 00:13:00.254419 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:13:00.266956 kernel: AES CTR mode by8 optimization enabled Jul 10 00:13:00.267027 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 00:13:00.287917 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:13:00.288385 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:13:00.328475 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 00:13:00.328764 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 00:13:00.328951 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:13:00.333100 kernel: scsi host0: ahci Jul 10 00:13:00.333392 kernel: scsi host1: ahci Jul 10 00:13:00.338810 kernel: scsi host2: ahci Jul 10 00:13:00.339507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 10 00:13:00.352545 kernel: scsi host3: ahci Jul 10 00:13:00.344542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:13:00.356327 kernel: scsi host4: ahci Jul 10 00:13:00.364331 kernel: scsi host5: ahci Jul 10 00:13:00.378677 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 0 Jul 10 00:13:00.378765 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 0 Jul 10 00:13:00.378783 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 0 Jul 10 00:13:00.378799 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 0 Jul 10 00:13:00.378814 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 0 Jul 10 00:13:00.378829 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 0 Jul 10 00:13:00.379828 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:13:00.393127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:13:00.435687 disk-uuid[633]: Primary Header is updated. Jul 10 00:13:00.435687 disk-uuid[633]: Secondary Entries is updated. Jul 10 00:13:00.435687 disk-uuid[633]: Secondary Header is updated. Jul 10 00:13:00.441984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:13:00.457108 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:13:00.694977 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:13:00.702670 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:13:00.702770 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:13:00.702788 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:13:00.702803 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:13:00.704506 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:13:00.704545 kernel: ata3.00: applying bridge limits Jul 10 00:13:00.705971 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:13:00.706977 kernel: ata3.00: configured for UDMA/100 Jul 10 00:13:00.708997 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:13:00.804414 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:13:00.804911 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:13:00.835040 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:13:01.447318 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:13:01.466790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:13:01.471150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:13:01.474123 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:13:01.475778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:13:01.577036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:13:01.583777 disk-uuid[634]: The operation has completed successfully. Jul 10 00:13:01.612422 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:13:01.729478 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:13:01.729687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:13:01.786322 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 10 00:13:01.824648 sh[664]: Success Jul 10 00:13:01.860313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:13:01.860411 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:13:01.862071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:13:01.913979 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:13:02.004023 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:13:02.035395 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:13:02.074820 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:13:02.168974 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:13:02.172146 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (676) Jul 10 00:13:02.176995 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:13:02.177076 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:13:02.177115 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:13:02.243076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:13:02.245048 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:13:02.317460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:13:02.318873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:13:02.327454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:13:02.385030 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Jul 10 00:13:02.388863 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:13:02.388960 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:13:02.388977 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:13:02.449237 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:13:02.496573 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:13:02.505510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:13:02.640031 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:13:02.809192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:13:03.073830 systemd-networkd[846]: lo: Link UP Jul 10 00:13:03.073847 systemd-networkd[846]: lo: Gained carrier Jul 10 00:13:03.076293 systemd-networkd[846]: Enumeration completed Jul 10 00:13:03.076844 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:13:03.076850 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:13:03.077388 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 10 00:13:03.080773 systemd-networkd[846]: eth0: Link UP Jul 10 00:13:03.080778 systemd-networkd[846]: eth0: Gained carrier Jul 10 00:13:03.080789 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:13:03.081957 systemd[1]: Reached target network.target - Network. Jul 10 00:13:03.139559 ignition[784]: Ignition 2.21.0 Jul 10 00:13:03.139578 ignition[784]: Stage: fetch-offline Jul 10 00:13:03.140098 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:03.140114 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:03.160620 systemd-networkd[846]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:13:03.140257 ignition[784]: parsed url from cmdline: "" Jul 10 00:13:03.140264 ignition[784]: no config URL provided Jul 10 00:13:03.140273 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:13:03.140287 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:13:03.140328 ignition[784]: op(1): [started] loading QEMU firmware config module Jul 10 00:13:03.140336 ignition[784]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:13:03.190260 ignition[784]: op(1): [finished] loading QEMU firmware config module Jul 10 00:13:03.236996 ignition[784]: parsing config with SHA512: a057c9cba774090d517eebfa2ee05885eab7583fa40351c901594f414ec850833b6a421a09f9d0db806b75a9cc4191824b00ec7af172abcd730add98c277054d Jul 10 00:13:03.246306 unknown[784]: fetched base config from "system" Jul 10 00:13:03.246763 unknown[784]: fetched user config from "qemu" Jul 10 00:13:03.253007 ignition[784]: fetch-offline: fetch-offline passed Jul 10 00:13:03.253188 ignition[784]: Ignition finished successfully Jul 10 00:13:03.261053 systemd-resolved[290]: Detected conflict on linux IN A 10.0.0.15 Jul 10 00:13:03.261067 systemd-resolved[290]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jul 10 00:13:03.264891 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:13:03.275312 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:13:03.280652 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:13:03.401523 ignition[859]: Ignition 2.21.0 Jul 10 00:13:03.404027 ignition[859]: Stage: kargs Jul 10 00:13:03.404436 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:03.404458 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:03.409369 ignition[859]: kargs: kargs passed Jul 10 00:13:03.449353 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:13:03.409612 ignition[859]: Ignition finished successfully Jul 10 00:13:03.457310 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:13:03.589914 ignition[867]: Ignition 2.21.0 Jul 10 00:13:03.590015 ignition[867]: Stage: disks Jul 10 00:13:03.602341 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:13:03.590240 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:03.606731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:13:03.590255 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:03.606815 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jul 10 00:13:03.591794 ignition[867]: disks: disks passed Jul 10 00:13:03.606872 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:13:03.591867 ignition[867]: Ignition finished successfully Jul 10 00:13:03.606952 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:13:03.607012 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:13:03.611570 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:13:03.668264 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:13:04.281957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:13:04.293402 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:13:04.480302 systemd-networkd[846]: eth0: Gained IPv6LL Jul 10 00:13:04.758999 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:13:04.761058 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:13:04.764531 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:13:04.784331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:13:04.787589 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:13:04.798658 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:13:04.798735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:13:04.798777 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:13:04.858957 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:13:04.865320 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (885) Jul 10 00:13:04.865355 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:13:04.865382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:13:04.865396 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:13:04.877265 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:13:04.886538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:13:05.091233 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:13:05.109753 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:13:05.119465 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:13:05.133012 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:13:05.457993 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:13:05.467203 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:13:05.473027 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:13:05.534499 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:13:05.544650 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:13:05.727188 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 10 00:13:05.796114 ignition[999]: INFO : Ignition 2.21.0 Jul 10 00:13:05.796114 ignition[999]: INFO : Stage: mount Jul 10 00:13:05.808040 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:05.808040 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:05.935206 ignition[999]: INFO : mount: mount passed Jul 10 00:13:05.935206 ignition[999]: INFO : Ignition finished successfully Jul 10 00:13:05.949703 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:13:05.959300 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:13:06.024352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:13:06.083545 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Jul 10 00:13:06.089206 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:13:06.089302 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:13:06.089322 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:13:06.125781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:13:06.268979 ignition[1029]: INFO : Ignition 2.21.0 Jul 10 00:13:06.268979 ignition[1029]: INFO : Stage: files Jul 10 00:13:06.268979 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:06.268979 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:06.280992 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:13:06.289272 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:13:06.289272 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:13:06.296938 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:13:06.306145 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:13:06.309305 unknown[1029]: wrote ssh authorized keys file for user: core Jul 10 00:13:06.319437 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:13:06.329036 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:13:06.329036 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:13:06.414058 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:13:06.718607 ignition[1029]: INFO : 
files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:13:06.718607 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:13:06.784221 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 00:13:07.551304 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 00:13:09.344901 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:13:09.344901 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 00:13:09.357823 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:13:09.455679 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:13:09.455679 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 00:13:09.455679 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 10 00:13:09.480246 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:13:09.480246 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:13:09.480246 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 10 00:13:09.480246 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:13:09.617979 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:13:09.654549 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 
00:13:09.660222 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:13:09.660222 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:13:09.660222 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:13:09.660222 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:13:09.660222 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:13:09.660222 ignition[1029]: INFO : files: files passed Jul 10 00:13:09.660222 ignition[1029]: INFO : Ignition finished successfully Jul 10 00:13:09.707368 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 00:13:09.719202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 00:13:09.740426 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 00:13:09.778255 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:13:09.778435 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 00:13:09.798736 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 00:13:09.804102 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:13:09.809985 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:13:09.814471 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:13:09.818999 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:13:09.831424 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 00:13:09.841351 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 00:13:09.971059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:13:09.971250 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 00:13:09.973592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 00:13:09.976610 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 00:13:09.984493 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 00:13:09.986341 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 00:13:10.054043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:13:10.061816 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:13:10.129706 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:13:10.133203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:13:10.135245 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:13:10.141348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:13:10.141600 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 10 00:13:10.162801 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:13:10.168891 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:13:10.169309 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:13:10.170526 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:13:10.171073 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:13:10.171403 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:13:10.171804 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:13:10.181608 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:13:10.187229 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:13:10.192307 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:13:10.204785 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:13:10.208409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:13:10.208649 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:13:10.214328 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:13:10.216159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:13:10.227492 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:13:10.231653 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:13:10.277865 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:13:10.280289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:13:10.293818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:13:10.294047 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:13:10.294263 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:13:10.294359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:13:10.303174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:13:10.306127 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:13:10.311850 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:13:10.314429 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:13:10.314604 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:13:10.318460 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:13:10.318626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:13:10.320634 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:13:10.320820 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:13:10.325887 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:13:10.326138 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:13:10.330702 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:13:10.335438 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:13:10.335664 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:13:10.338757 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 10 00:13:10.351266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:13:10.356112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:13:10.367603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:13:10.367773 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:13:10.390849 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:13:10.391105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:13:10.400606 ignition[1084]: INFO : Ignition 2.21.0 Jul 10 00:13:10.400606 ignition[1084]: INFO : Stage: umount Jul 10 00:13:10.400606 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:13:10.400606 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:13:10.410174 ignition[1084]: INFO : umount: umount passed Jul 10 00:13:10.410174 ignition[1084]: INFO : Ignition finished successfully Jul 10 00:13:10.414205 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:13:10.414407 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:13:10.420529 systemd[1]: Stopped target network.target - Network. Jul 10 00:13:10.444957 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:13:10.449033 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:13:10.449683 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:13:10.449757 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:13:10.452150 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:13:10.452224 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:13:10.467252 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:13:10.467357 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:13:10.469607 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:13:10.481309 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:13:10.485499 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:13:10.495535 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:13:10.495755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:13:10.512553 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 00:13:10.513739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:13:10.513898 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:13:10.538696 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:13:10.539140 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:13:10.539657 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:13:10.551399 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 00:13:10.553062 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 00:13:10.557810 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:13:10.557910 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:13:10.562864 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 10 00:13:10.568549 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:13:10.568680 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:13:10.571275 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:13:10.571362 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:13:10.581831 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:13:10.582943 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:13:10.583390 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:13:10.596612 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:13:10.602082 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:13:10.602311 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:13:10.606625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:13:10.606702 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:13:10.613249 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:13:10.613331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:13:10.616596 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:13:10.616721 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:13:10.622532 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:13:10.622626 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:13:10.624875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:13:10.624993 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:13:10.635014 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:13:10.655633 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 00:13:10.656585 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:13:10.671590 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:13:10.671700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:13:10.687278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:13:10.687411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:13:10.699354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:13:10.699518 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:13:10.783279 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:13:10.783469 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:13:11.323451 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:13:11.326035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:13:11.332366 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:13:11.337478 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:13:11.337662 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:13:11.361717 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jul 10 00:13:11.407887 systemd[1]: Switching root. Jul 10 00:13:11.481211 systemd-journald[222]: Received SIGTERM from PID 1 (systemd). Jul 10 00:13:11.481304 systemd-journald[222]: Journal stopped Jul 10 00:13:14.980378 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:13:14.980474 kernel: SELinux: policy capability open_perms=1 Jul 10 00:13:14.980493 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:13:14.980514 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:13:14.980584 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:13:14.980600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:13:14.980617 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:13:14.980644 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:13:14.980666 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 00:13:14.980688 kernel: audit: type=1403 audit(1752106392.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:13:14.980706 systemd[1]: Successfully loaded SELinux policy in 102.266ms. Jul 10 00:13:14.980740 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 33.572ms. Jul 10 00:13:14.980759 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:13:14.980787 systemd[1]: Detected virtualization kvm. Jul 10 00:13:14.980805 systemd[1]: Detected architecture x86-64. Jul 10 00:13:14.980823 systemd[1]: Detected first boot. Jul 10 00:13:14.980840 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:13:14.980857 zram_generator::config[1129]: No configuration found. Jul 10 00:13:14.981934 kernel: Guest personality initialized and is inactive Jul 10 00:13:14.981956 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 00:13:14.981974 kernel: Initialized host personality Jul 10 00:13:14.982006 kernel: NET: Registered PF_VSOCK protocol family Jul 10 00:13:14.982025 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:13:14.982045 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 00:13:14.982062 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:13:14.982087 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:13:14.982105 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:13:14.982123 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:13:14.982153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:13:14.982174 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:13:14.982202 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:13:14.982221 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:13:14.982238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:13:14.982256 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:13:14.982273 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 10 00:13:14.982291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:13:14.982309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:13:14.982330 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:13:14.982348 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:13:14.982375 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:13:14.982393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:13:14.982408 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 00:13:14.982423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:13:14.982438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:13:14.982452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:13:14.982467 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:13:14.982491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:13:14.982506 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:13:14.982522 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:13:14.982549 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:13:14.982567 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:13:14.982585 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:13:14.982601 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:13:14.982619 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:13:14.982637 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 00:13:14.982672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:13:14.982691 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:13:14.982707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:13:14.982725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:13:14.982742 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:13:14.982764 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:13:14.982785 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:13:14.982803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:13:14.982822 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:13:14.982852 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:13:14.982885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:13:14.982904 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:13:14.982937 systemd[1]: Reached target machines.target - Containers. 
Jul 10 00:13:14.982955 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:13:14.982977 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:13:14.983005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:13:14.983023 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:13:14.983040 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:13:14.983069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:13:14.983087 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:13:14.983104 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:13:14.983122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:13:14.983141 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:13:14.983158 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:13:14.983177 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:13:14.983194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:13:14.983222 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:13:14.983241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:13:14.983259 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:13:14.983277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:13:14.983294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:13:14.983312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:13:14.983331 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 00:13:14.983359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:13:14.983377 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:13:14.983394 systemd[1]: Stopped verity-setup.service. Jul 10 00:13:14.983422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:13:14.983440 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:13:14.983457 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:13:14.983485 kernel: loop: module loaded Jul 10 00:13:14.983503 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:13:14.983520 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:13:14.983538 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:13:14.983556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:13:14.983592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:13:14.983619 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 10 00:13:14.983637 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:13:14.983656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:13:14.983673 kernel: fuse: init (API version 7.41) Jul 10 00:13:14.983690 kernel: ACPI: bus type drm_connector registered Jul 10 00:13:14.983707 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:13:14.983724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:13:14.983742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:13:14.983760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:13:14.983829 systemd-journald[1211]: Collecting audit messages is disabled. Jul 10 00:13:14.983884 systemd-journald[1211]: Journal started Jul 10 00:13:14.983915 systemd-journald[1211]: Runtime Journal (/run/log/journal/00881a4f81e54ff0a2471453bffdac86) is 6M, max 48.5M, 42.4M free. Jul 10 00:13:13.847081 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:13:13.877772 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:13:13.881143 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:13:13.885026 systemd[1]: systemd-journald.service: Consumed 1.204s CPU time. Jul 10 00:13:14.994670 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:13:14.996293 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:13:14.996662 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:13:15.005325 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:13:15.005712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:13:15.014087 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:13:15.014390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:13:15.024193 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:13:15.028039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:13:15.032712 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:13:15.036975 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 00:13:15.039616 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:13:15.087501 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:13:15.100151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:13:15.107115 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:13:15.110232 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:13:15.110303 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:13:15.118972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 00:13:15.133953 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:13:15.142225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:13:15.152388 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 10 00:13:15.158734 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:13:15.164276 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:13:15.175070 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:13:15.179133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:13:15.182769 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:13:15.193100 systemd-journald[1211]: Time spent on flushing to /var/log/journal/00881a4f81e54ff0a2471453bffdac86 is 54.807ms for 1061 entries. Jul 10 00:13:15.193100 systemd-journald[1211]: System Journal (/var/log/journal/00881a4f81e54ff0a2471453bffdac86) is 8M, max 195.6M, 187.6M free. Jul 10 00:13:15.392690 systemd-journald[1211]: Received client request to flush runtime journal. Jul 10 00:13:15.392767 kernel: loop0: detected capacity change from 0 to 113872 Jul 10 00:13:15.195695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:13:15.204162 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:13:15.222087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:13:15.226551 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:13:15.354644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:13:15.361358 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:13:15.369978 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:13:15.399226 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:13:15.408741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:13:15.506569 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:13:15.516819 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:13:15.521080 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 10 00:13:15.523142 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:13:15.537193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:13:15.563981 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 00:13:15.683716 kernel: loop2: detected capacity change from 0 to 146240 Jul 10 00:13:15.705258 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jul 10 00:13:15.705285 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jul 10 00:13:15.721304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:13:15.760981 kernel: loop3: detected capacity change from 0 to 113872 Jul 10 00:13:15.813152 kernel: loop4: detected capacity change from 0 to 221472 Jul 10 00:13:15.922854 kernel: loop5: detected capacity change from 0 to 146240 Jul 10 00:13:16.081025 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:13:16.085118 (sd-merge)[1271]: Merged extensions into '/usr'. Jul 10 00:13:16.094642 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... 
Jul 10 00:13:16.094691 systemd[1]: Reloading... Jul 10 00:13:16.266518 zram_generator::config[1297]: No configuration found. Jul 10 00:13:16.528213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:13:16.675780 systemd[1]: Reloading finished in 580 ms. Jul 10 00:13:16.735623 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:13:16.758970 systemd[1]: Starting ensure-sysext.service... Jul 10 00:13:16.763153 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:13:16.831554 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:13:16.831774 systemd[1]: Reloading... Jul 10 00:13:16.895863 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:13:16.896350 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:13:16.896711 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:13:16.899418 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:13:16.903854 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:13:16.904297 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jul 10 00:13:16.904390 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jul 10 00:13:16.925295 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:13:16.925477 systemd-tmpfiles[1334]: Skipping /boot Jul 10 00:13:16.997962 zram_generator::config[1361]: No configuration found. Jul 10 00:13:17.092492 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:13:17.092517 systemd-tmpfiles[1334]: Skipping /boot Jul 10 00:13:17.225283 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:13:17.386545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:13:17.545226 systemd[1]: Reloading finished in 712 ms. Jul 10 00:13:17.608114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:13:17.664619 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:13:17.711403 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:13:17.748734 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:13:17.792278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:13:17.811268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:13:17.849626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:13:17.877170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 10 00:13:17.879460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:13:17.882452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:13:17.887154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:13:17.899332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:13:17.922402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:13:17.932669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:13:17.935122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:13:17.951614 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:13:17.953838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:13:17.957836 systemd[1]: Finished ensure-sysext.service. Jul 10 00:13:17.961037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:13:17.963859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:13:17.975184 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:13:17.983437 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:13:17.983818 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:13:17.987974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:13:17.988564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:13:17.997251 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:13:17.997604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:13:18.012938 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:13:18.014904 augenrules[1430]: No rules Jul 10 00:13:18.021945 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:13:18.024726 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:13:18.025197 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:13:18.028621 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:13:18.043470 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:13:18.043610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:13:18.049125 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:13:18.056136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:13:18.064162 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 10 00:13:18.065881 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:13:18.096373 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:13:18.111575 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:13:18.168298 systemd-udevd[1446]: Using default interface naming scheme 'v255'. Jul 10 00:13:18.215009 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:13:18.224053 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:13:18.587439 systemd-resolved[1409]: Positive Trust Anchors: Jul 10 00:13:18.587746 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:13:18.588211 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:13:18.588331 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:13:18.601511 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:13:18.617979 systemd-resolved[1409]: Defaulting to hostname 'linux'. Jul 10 00:13:18.626273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:13:18.637901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:13:18.650430 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:13:18.655475 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:13:18.660878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:13:18.662954 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:13:18.664787 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:13:18.666300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:13:18.669296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:13:18.671095 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:13:18.671147 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:13:18.674409 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:13:18.677208 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:13:18.682617 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:13:18.703055 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:13:18.704985 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jul 10 00:13:18.706699 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:13:18.747324 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:13:18.752794 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:13:18.755372 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:13:18.764752 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:13:18.781774 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:13:18.784209 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:13:18.785896 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:13:18.785971 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:13:18.815875 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:13:18.830477 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:13:18.841827 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:13:18.850373 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:13:18.857262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:13:18.870195 jq[1497]: false Jul 10 00:13:18.877179 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:13:18.908143 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:13:18.910087 extend-filesystems[1498]: Found /dev/vda6 Jul 10 00:13:18.922075 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:13:18.930344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:13:18.940283 extend-filesystems[1498]: Found /dev/vda9 Jul 10 00:13:18.957878 extend-filesystems[1498]: Checking size of /dev/vda9 Jul 10 00:13:18.956583 oslogin_cache_refresh[1499]: Refreshing passwd entry cache Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing passwd entry cache Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting users, quitting Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing group entry cache Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting groups, quitting Jul 10 00:13:18.965311 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:13:18.960806 oslogin_cache_refresh[1499]: Failure getting users, quitting Jul 10 00:13:18.960834 oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:13:18.960906 oslogin_cache_refresh[1499]: Refreshing group entry cache Jul 10 00:13:18.961597 oslogin_cache_refresh[1499]: Failure getting groups, quitting Jul 10 00:13:18.961610 oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jul 10 00:13:18.978232 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:13:18.987807 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:13:18.991160 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:13:18.993058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:13:19.005976 extend-filesystems[1498]: Resized partition /dev/vda9 Jul 10 00:13:18.999421 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:13:19.012362 extend-filesystems[1518]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:13:19.030103 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:13:19.017836 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:13:19.038177 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:13:19.057138 update_engine[1516]: I20250710 00:13:19.043574 1516 main.cc:92] Flatcar Update Engine starting Jul 10 00:13:19.040381 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:13:19.040758 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:13:19.041192 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:13:19.041474 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:13:19.055593 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:13:19.056735 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:13:19.077833 jq[1522]: true Jul 10 00:13:19.070897 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:13:19.086851 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:13:19.087311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:13:19.134106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:13:19.150431 jq[1526]: true Jul 10 00:13:19.176055 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:13:19.219534 systemd-networkd[1459]: lo: Link UP Jul 10 00:13:19.219547 systemd-networkd[1459]: lo: Gained carrier Jul 10 00:13:19.221952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 10 00:13:19.225593 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:13:19.225593 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:13:19.225593 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:13:19.231487 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Jul 10 00:13:19.228119 systemd-networkd[1459]: Enumeration completed Jul 10 00:13:19.228248 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:13:19.230803 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:13:19.230809 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:13:19.235834 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 10 00:13:19.236527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:13:19.237947 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:13:19.241273 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:13:19.243463 systemd[1]: Reached target network.target - Network. Jul 10 00:13:19.244430 systemd-networkd[1459]: eth0: Link UP Jul 10 00:13:19.248525 systemd-networkd[1459]: eth0: Gained carrier Jul 10 00:13:19.248573 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:13:19.258054 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:13:19.265489 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:13:19.273981 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 00:13:19.274444 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:13:19.274668 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:13:19.275193 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:13:19.290014 tar[1523]: linux-amd64/helm Jul 10 00:13:19.292954 systemd-networkd[1459]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:13:19.294983 dbus-daemon[1495]: [system] SELinux support is enabled Jul 10 00:13:19.296967 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:13:19.301816 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jul 10 00:13:19.305523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:13:19.305567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:13:19.305691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:13:19.305715 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:13:21.225139 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:13:21.225229 systemd-timesyncd[1445]: Initial clock synchronization to Thu 2025-07-10 00:13:21.224927 UTC. Jul 10 00:13:21.229130 systemd-resolved[1409]: Clock change detected. Flushing caches. Jul 10 00:13:21.254293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:13:21.257219 update_engine[1516]: I20250710 00:13:21.255506 1516 update_check_scheduler.cc:74] Next update check in 2m14s Jul 10 00:13:21.274020 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:13:21.302092 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:13:21.327025 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:13:21.357563 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:13:21.365932 systemd-logind[1514]: New seat seat0. 
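The resize2fs output above grows /dev/vda9 from 553472 to 1864699 blocks, and the EXT4 driver confirms 4 KiB blocks: roughly a 2.1 GiB root filesystem being resized online to about 7.1 GiB. A quick check of that arithmetic on the numbers quoted in the log (the helper name is this sketch's own):

    BLOCK_SIZE = 4096  # resize2fs reports "(4k) blocks" for /dev/vda9

    def blocks_to_gib(blocks: int) -> float:
        """Convert a count of 4 KiB ext4 blocks into GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {blocks_to_gib(553472):.2f} GiB")   # ~2.11 GiB
    print(f"after resize:  {blocks_to_gib(1864699):.2f} GiB")  # ~7.11 GiB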
Jul 10 00:13:21.367705 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:13:21.371064 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:13:21.376817 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:13:21.379498 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:13:21.380084 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:13:21.432294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:13:21.466731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:13:21.473444 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:13:21.522794 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:13:21.524288 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:13:21.531412 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:13:21.752473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:13:21.753056 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:13:21.761374 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:13:21.770347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:13:21.783213 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:13:22.040668 systemd-logind[1514]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:13:22.092856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:13:22.105557 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:13:22.116637 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:13:22.117329 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:13:22.118542 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:13:22.336464 kernel: kvm_amd: TSC scaling supported Jul 10 00:13:22.336575 kernel: kvm_amd: Nested Virtualization enabled Jul 10 00:13:22.336593 kernel: kvm_amd: Nested Paging enabled Jul 10 00:13:22.336610 kernel: kvm_amd: LBR virtualization supported Jul 10 00:13:22.341918 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 00:13:22.342076 kernel: kvm_amd: Virtual GIF supported Jul 10 00:13:22.341418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:13:22.527445 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:13:22.536132 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:48398.service - OpenSSH per-connection server daemon (10.0.0.1:48398). 
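The DHCP lease above puts eth0 at 10.0.0.15/16 with gateway 10.0.0.1, and the first SSH connection (sshd@0) arrives from that same 10.0.0.1 onto 10.0.0.15:22, so client and server sit on one /16. A small sanity check with the standard ipaddress module, using only addresses quoted in the log (variable names are this sketch's own):

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.15/16")   # eth0 lease from the log above
    peer = ipaddress.ip_address("10.0.0.1")          # gateway, and the SSH client in sshd@0-...

    print(iface.network)          # 10.0.0.0/16
    print(peer in iface.network)  # True: the peer is on-link, reachable without another gateway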
Jul 10 00:13:22.612215 containerd[1577]: time="2025-07-10T00:13:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:13:22.616947 containerd[1577]: time="2025-07-10T00:13:22.616902574Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:13:22.670124 containerd[1577]: time="2025-07-10T00:13:22.670054384Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.915µs" Jul 10 00:13:22.670282 containerd[1577]: time="2025-07-10T00:13:22.670264979Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:13:22.670344 containerd[1577]: time="2025-07-10T00:13:22.670331273Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:13:22.670684 containerd[1577]: time="2025-07-10T00:13:22.670664778Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:13:22.670770 containerd[1577]: time="2025-07-10T00:13:22.670755218Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:13:22.670854 containerd[1577]: time="2025-07-10T00:13:22.670841019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671024 containerd[1577]: time="2025-07-10T00:13:22.670995729Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671093 containerd[1577]: time="2025-07-10T00:13:22.671079005Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671582 containerd[1577]: time="2025-07-10T00:13:22.671540520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671675 containerd[1577]: time="2025-07-10T00:13:22.671659964Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671734 containerd[1577]: time="2025-07-10T00:13:22.671720217Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:13:22.671800 containerd[1577]: time="2025-07-10T00:13:22.671787023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:13:22.672015 containerd[1577]: time="2025-07-10T00:13:22.671989642Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:13:22.672604 containerd[1577]: time="2025-07-10T00:13:22.672569209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:13:22.675845 containerd[1577]: time="2025-07-10T00:13:22.675810376Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:13:22.675953 containerd[1577]: time="2025-07-10T00:13:22.675936934Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:13:22.676147 containerd[1577]: time="2025-07-10T00:13:22.676083869Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:13:22.676639 containerd[1577]: time="2025-07-10T00:13:22.676617780Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:13:22.676895 containerd[1577]: time="2025-07-10T00:13:22.676875253Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:13:22.698786 containerd[1577]: time="2025-07-10T00:13:22.698646033Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:13:22.699043 containerd[1577]: time="2025-07-10T00:13:22.699018922Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:13:22.699306 containerd[1577]: time="2025-07-10T00:13:22.699283649Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:13:22.699380 containerd[1577]: time="2025-07-10T00:13:22.699365312Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:13:22.699469 containerd[1577]: time="2025-07-10T00:13:22.699443428Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:13:22.699561 containerd[1577]: time="2025-07-10T00:13:22.699542925Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:13:22.699642 containerd[1577]: time="2025-07-10T00:13:22.699627884Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:13:22.699722 containerd[1577]: time="2025-07-10T00:13:22.699708065Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:13:22.699814 containerd[1577]: time="2025-07-10T00:13:22.699799065Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:13:22.699907 containerd[1577]: time="2025-07-10T00:13:22.699891749Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:13:22.700071 containerd[1577]: time="2025-07-10T00:13:22.700055706Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:13:22.700196 containerd[1577]: time="2025-07-10T00:13:22.700181703Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709317803Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709404956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709442657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709456292Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709470469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709482381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709497059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709525121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709540290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709552393Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709564846Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709776262Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709800277Z" level=info msg="Start snapshots syncer" Jul 10 00:13:22.711808 containerd[1577]: time="2025-07-10T00:13:22.709860510Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:13:22.712987 containerd[1577]: time="2025-07-10T00:13:22.710363523Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:13:22.712987 containerd[1577]: time="2025-07-10T00:13:22.710463872Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710578076Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710835358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710876926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710904668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710918183Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710948170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710959481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.710993745Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.711022118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: 
time="2025-07-10T00:13:22.711034281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.711062775Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.711194441Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.711236651Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:13:22.713214 containerd[1577]: time="2025-07-10T00:13:22.711247140Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711257650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711266647Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711278158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711309006Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711334735Z" level=info msg="runtime interface created" Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711340846Z" level=info msg="created NRI interface" Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711350704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711378507Z" level=info msg="Connect containerd service" Jul 10 00:13:22.713576 containerd[1577]: time="2025-07-10T00:13:22.711416869Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:13:22.717803 containerd[1577]: time="2025-07-10T00:13:22.717019393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:13:23.022499 systemd-networkd[1459]: eth0: Gained IPv6LL Jul 10 00:13:23.059344 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:13:23.065221 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:13:23.075076 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:13:23.140029 tar[1523]: linux-amd64/LICENSE Jul 10 00:13:23.140029 tar[1523]: linux-amd64/README.md Jul 10 00:13:23.161482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:23.194469 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:13:23.272104 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 10 00:13:23.315823 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:13:23.316384 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:13:23.322433 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:13:23.403962 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 48398 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:23.401263 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:13:23.390529 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:23.420639 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:13:23.425797 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:13:23.448819 systemd-logind[1514]: New session 1 of user core. Jul 10 00:13:23.546096 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:13:23.568809 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:13:23.628884 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:13:23.631304 containerd[1577]: time="2025-07-10T00:13:23.629517457Z" level=info msg="Start subscribing containerd event" Jul 10 00:13:23.631304 containerd[1577]: time="2025-07-10T00:13:23.629593900Z" level=info msg="Start recovering state" Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636675458Z" level=info msg="Start event monitor" Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636731894Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636755839Z" level=info msg="Start streaming server" Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636774103Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636784943Z" level=info msg="runtime interface starting up..." Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636793680Z" level=info msg="starting plugins..." Jul 10 00:13:23.637645 containerd[1577]: time="2025-07-10T00:13:23.636818566Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:13:23.637953 containerd[1577]: time="2025-07-10T00:13:23.637729394Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:13:23.637953 containerd[1577]: time="2025-07-10T00:13:23.637802652Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:13:23.639421 systemd-logind[1514]: New session c1 of user core. Jul 10 00:13:23.651496 containerd[1577]: time="2025-07-10T00:13:23.649749943Z" level=info msg="containerd successfully booted in 1.038913s" Jul 10 00:13:23.649908 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:13:24.054088 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:13:24.283681 systemd[1662]: Queued start job for default target default.target. Jul 10 00:13:24.314118 systemd[1662]: Created slice app.slice - User Application Slice. Jul 10 00:13:24.314257 systemd[1662]: Reached target paths.target - Paths. Jul 10 00:13:24.314565 systemd[1662]: Reached target timers.target - Timers. Jul 10 00:13:24.317337 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jul 10 00:13:24.345170 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:13:24.345367 systemd[1662]: Reached target sockets.target - Sockets. Jul 10 00:13:24.345427 systemd[1662]: Reached target basic.target - Basic System. Jul 10 00:13:24.345483 systemd[1662]: Reached target default.target - Main User Target. Jul 10 00:13:24.345528 systemd[1662]: Startup finished in 680ms. Jul 10 00:13:24.345764 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:13:24.354256 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:13:24.449813 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:48410.service - OpenSSH per-connection server daemon (10.0.0.1:48410). Jul 10 00:13:24.545334 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 48410 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:24.548777 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:24.577648 systemd-logind[1514]: New session 2 of user core. Jul 10 00:13:24.605145 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:13:24.717887 sshd[1677]: Connection closed by 10.0.0.1 port 48410 Jul 10 00:13:24.720241 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:24.733732 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:48410.service: Deactivated successfully. Jul 10 00:13:24.738906 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:13:24.742006 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:13:24.755671 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:48422.service - OpenSSH per-connection server daemon (10.0.0.1:48422). Jul 10 00:13:24.768037 systemd-logind[1514]: Removed session 2. Jul 10 00:13:24.903366 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 48422 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:24.908188 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:24.926241 systemd-logind[1514]: New session 3 of user core. Jul 10 00:13:24.956140 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:13:25.051430 sshd[1685]: Connection closed by 10.0.0.1 port 48422 Jul 10 00:13:25.051878 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:25.067704 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:48422.service: Deactivated successfully. Jul 10 00:13:25.074712 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:13:25.082128 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:13:25.097361 systemd-logind[1514]: Removed session 3. Jul 10 00:13:27.628400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:27.633475 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:13:27.637843 systemd[1]: Startup finished in 7.907s (kernel) + 15.523s (initrd) + 13.455s (userspace) = 36.886s. 
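systemd prints the boot above as 7.907 s (kernel) + 15.523 s (initrd) + 13.455 s (userspace) = 36.886 s. The three components are rounded to the millisecond independently of the total, so adding the printed values back up can land a millisecond or so off the printed total, which is what a quick check shows:

    kernel, initrd, userspace = 7.907, 15.523, 13.455   # seconds, from the "Startup finished" entry above
    total_reported = 36.886

    recomputed = kernel + initrd + userspace
    print(f"{recomputed:.3f}")                        # 36.885
    print(abs(recomputed - total_reported) <= 0.002)  # True: consistent within rounding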
Jul 10 00:13:27.646723 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:13:29.676786 kubelet[1698]: E0710 00:13:29.676687 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:13:29.687159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:13:29.687430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:13:29.687944 systemd[1]: kubelet.service: Consumed 3.366s CPU time, 268.6M memory peak. Jul 10 00:13:35.073555 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:59302.service - OpenSSH per-connection server daemon (10.0.0.1:59302). Jul 10 00:13:35.155850 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 59302 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:35.158187 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:35.168367 systemd-logind[1514]: New session 4 of user core. Jul 10 00:13:35.174315 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:13:35.245321 sshd[1710]: Connection closed by 10.0.0.1 port 59302 Jul 10 00:13:35.245933 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:35.265793 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:59302.service: Deactivated successfully. Jul 10 00:13:35.271305 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:13:35.272477 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:13:35.282850 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Jul 10 00:13:35.284184 systemd-logind[1514]: Removed session 4. Jul 10 00:13:35.374491 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:35.378305 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:35.387907 systemd-logind[1514]: New session 5 of user core. Jul 10 00:13:35.411700 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:13:35.479688 sshd[1718]: Connection closed by 10.0.0.1 port 59312 Jul 10 00:13:35.479758 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:35.495862 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:59312.service: Deactivated successfully. Jul 10 00:13:35.498517 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:13:35.500635 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:13:35.507922 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:59318.service - OpenSSH per-connection server daemon (10.0.0.1:59318). Jul 10 00:13:35.509516 systemd-logind[1514]: Removed session 5. Jul 10 00:13:35.575171 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 59318 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:35.578069 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:35.589738 systemd-logind[1514]: New session 6 of user core. 
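The kubelet start above (00:13:29) fails almost immediately: it cannot read /var/lib/kubelet/config.yaml, so the unit exits with status 1 and systemd is left to schedule restarts (the same failure repeats further down with restart counters 1, 2 and 3). That file is normally written when the node is initialized or joined to a cluster, e.g. by kubeadm, which has not happened yet at this point in the log. An illustrative triage helper, not part of the boot itself, that pulls the missing path out of such an error line (regex, names and the truncated sample are this sketch's own assumptions):

    import re

    # Matches the tail of the kubelet error above:
    #   "... error: open /var/lib/kubelet/config.yaml: no such file or directory"
    MISSING = re.compile(r"open (?P<path>/[^\s:]+): no such file or directory")

    def missing_path(log_line: str):
        """Return the path a 'no such file or directory' error complains about, or None."""
        m = MISSING.search(log_line)
        return m.group("path") if m else None

    sample = ('run.go:72] "command failed" err="failed to load kubelet config file ... '
              'error: open /var/lib/kubelet/config.yaml: no such file or directory"')
    print(missing_path(sample))   # /var/lib/kubelet/config.yaml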
Jul 10 00:13:35.606939 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:13:35.675237 sshd[1726]: Connection closed by 10.0.0.1 port 59318 Jul 10 00:13:35.675052 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:35.694779 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:59318.service: Deactivated successfully. Jul 10 00:13:35.697616 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:13:35.699835 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:13:35.703266 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:59328.service - OpenSSH per-connection server daemon (10.0.0.1:59328). Jul 10 00:13:35.704840 systemd-logind[1514]: Removed session 6. Jul 10 00:13:35.774896 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 59328 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:35.777112 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:35.784309 systemd-logind[1514]: New session 7 of user core. Jul 10 00:13:35.794397 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:13:35.869671 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:13:35.870175 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:13:35.890329 sudo[1735]: pam_unix(sudo:session): session closed for user root Jul 10 00:13:35.893095 sshd[1734]: Connection closed by 10.0.0.1 port 59328 Jul 10 00:13:35.893598 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:35.910177 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:59328.service: Deactivated successfully. Jul 10 00:13:35.913130 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:13:35.914276 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:13:35.918782 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:59334.service - OpenSSH per-connection server daemon (10.0.0.1:59334). Jul 10 00:13:35.919940 systemd-logind[1514]: Removed session 7. Jul 10 00:13:35.994224 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 59334 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:35.996342 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:36.002215 systemd-logind[1514]: New session 8 of user core. Jul 10 00:13:36.021332 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:13:36.078843 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:13:36.079260 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:13:36.223380 sudo[1745]: pam_unix(sudo:session): session closed for user root Jul 10 00:13:36.230727 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:13:36.231168 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:13:36.243411 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:13:36.303457 augenrules[1767]: No rules Jul 10 00:13:36.305515 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:13:36.305831 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 10 00:13:36.307320 sudo[1744]: pam_unix(sudo:session): session closed for user root Jul 10 00:13:36.309245 sshd[1743]: Connection closed by 10.0.0.1 port 59334 Jul 10 00:13:36.309579 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jul 10 00:13:36.328044 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:59334.service: Deactivated successfully. Jul 10 00:13:36.330573 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:13:36.331503 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:13:36.335401 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:59336.service - OpenSSH per-connection server daemon (10.0.0.1:59336). Jul 10 00:13:36.336583 systemd-logind[1514]: Removed session 8. Jul 10 00:13:36.390721 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 59336 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:13:36.392670 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:13:36.399071 systemd-logind[1514]: New session 9 of user core. Jul 10 00:13:36.406164 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:13:36.463632 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:13:36.463955 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:13:37.258012 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:13:37.273401 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:13:38.083455 dockerd[1800]: time="2025-07-10T00:13:38.083363554Z" level=info msg="Starting up" Jul 10 00:13:38.084453 dockerd[1800]: time="2025-07-10T00:13:38.084426888Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:13:38.376750 dockerd[1800]: time="2025-07-10T00:13:38.376544839Z" level=info msg="Loading containers: start." Jul 10 00:13:38.390008 kernel: Initializing XFRM netlink socket Jul 10 00:13:38.724072 systemd-networkd[1459]: docker0: Link UP Jul 10 00:13:38.731365 dockerd[1800]: time="2025-07-10T00:13:38.731298102Z" level=info msg="Loading containers: done." Jul 10 00:13:38.751437 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2201239089-merged.mount: Deactivated successfully. Jul 10 00:13:38.755895 dockerd[1800]: time="2025-07-10T00:13:38.755830199Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:13:38.756059 dockerd[1800]: time="2025-07-10T00:13:38.755958961Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:13:38.756159 dockerd[1800]: time="2025-07-10T00:13:38.756135552Z" level=info msg="Initializing buildkit" Jul 10 00:13:38.794427 dockerd[1800]: time="2025-07-10T00:13:38.794333062Z" level=info msg="Completed buildkit initialization" Jul 10 00:13:38.803047 dockerd[1800]: time="2025-07-10T00:13:38.802957293Z" level=info msg="Daemon has completed initialization" Jul 10 00:13:38.803226 dockerd[1800]: time="2025-07-10T00:13:38.803138202Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:13:38.803389 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 10 00:13:39.900599 containerd[1577]: time="2025-07-10T00:13:39.900524210Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:13:39.937825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:13:39.940031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:13:40.224196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:40.229161 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:13:40.292204 kubelet[2017]: E0710 00:13:40.292115 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:13:40.299151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:13:40.299388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:13:40.299802 systemd[1]: kubelet.service: Consumed 306ms CPU time, 108.4M memory peak. Jul 10 00:13:43.242373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771802308.mount: Deactivated successfully. Jul 10 00:13:45.033046 containerd[1577]: time="2025-07-10T00:13:45.032945869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:45.033983 containerd[1577]: time="2025-07-10T00:13:45.033934663Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 10 00:13:45.035296 containerd[1577]: time="2025-07-10T00:13:45.035239921Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:45.037723 containerd[1577]: time="2025-07-10T00:13:45.037666811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:45.041001 containerd[1577]: time="2025-07-10T00:13:45.040928968Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 5.140337902s" Jul 10 00:13:45.041001 containerd[1577]: time="2025-07-10T00:13:45.040987628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 00:13:45.042023 containerd[1577]: time="2025-07-10T00:13:45.041986691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:13:47.284548 containerd[1577]: time="2025-07-10T00:13:47.284467332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:47.285710 containerd[1577]: time="2025-07-10T00:13:47.285647194Z" level=info 
msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 10 00:13:47.288835 containerd[1577]: time="2025-07-10T00:13:47.288686583Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:47.292546 containerd[1577]: time="2025-07-10T00:13:47.292447224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:47.293865 containerd[1577]: time="2025-07-10T00:13:47.293776677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.251748698s" Jul 10 00:13:47.293865 containerd[1577]: time="2025-07-10T00:13:47.293851767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 00:13:47.294587 containerd[1577]: time="2025-07-10T00:13:47.294549746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:13:48.938678 containerd[1577]: time="2025-07-10T00:13:48.938604649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:48.939817 containerd[1577]: time="2025-07-10T00:13:48.939754535Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 10 00:13:48.941305 containerd[1577]: time="2025-07-10T00:13:48.941269906Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:48.944259 containerd[1577]: time="2025-07-10T00:13:48.944190672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:48.945159 containerd[1577]: time="2025-07-10T00:13:48.945124473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.650537267s" Jul 10 00:13:48.945159 containerd[1577]: time="2025-07-10T00:13:48.945156714Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 00:13:48.945774 containerd[1577]: time="2025-07-10T00:13:48.945719309Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:13:50.500176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:13:50.502369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 00:13:50.614340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840956622.mount: Deactivated successfully. Jul 10 00:13:50.730315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:13:50.759303 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:13:50.813590 kubelet[2099]: E0710 00:13:50.813465 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:13:50.818075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:13:50.818287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:13:50.818672 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.4M memory peak. Jul 10 00:13:52.941640 containerd[1577]: time="2025-07-10T00:13:52.941555160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:52.942561 containerd[1577]: time="2025-07-10T00:13:52.942518616Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 10 00:13:52.944388 containerd[1577]: time="2025-07-10T00:13:52.944346604Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:52.947051 containerd[1577]: time="2025-07-10T00:13:52.946999077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:52.947412 containerd[1577]: time="2025-07-10T00:13:52.947376705Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 4.001606401s" Jul 10 00:13:52.947412 containerd[1577]: time="2025-07-10T00:13:52.947409587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 00:13:52.947986 containerd[1577]: time="2025-07-10T00:13:52.947874969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:13:53.769537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507364771.mount: Deactivated successfully. 
Jul 10 00:13:55.810999 containerd[1577]: time="2025-07-10T00:13:55.810904058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:55.811819 containerd[1577]: time="2025-07-10T00:13:55.811710329Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 00:13:55.813576 containerd[1577]: time="2025-07-10T00:13:55.813507262Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:55.816911 containerd[1577]: time="2025-07-10T00:13:55.816845220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:55.818605 containerd[1577]: time="2025-07-10T00:13:55.818536121Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.870602985s" Jul 10 00:13:55.818657 containerd[1577]: time="2025-07-10T00:13:55.818607748Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:13:55.819248 containerd[1577]: time="2025-07-10T00:13:55.819211627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:13:56.388462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219823291.mount: Deactivated successfully. 
Jul 10 00:13:56.399289 containerd[1577]: time="2025-07-10T00:13:56.399210005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:56.400086 containerd[1577]: time="2025-07-10T00:13:56.400027927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:13:56.401339 containerd[1577]: time="2025-07-10T00:13:56.401292297Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:56.403773 containerd[1577]: time="2025-07-10T00:13:56.403726651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:13:56.404349 containerd[1577]: time="2025-07-10T00:13:56.404308974Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 585.06266ms" Jul 10 00:13:56.404349 containerd[1577]: time="2025-07-10T00:13:56.404335227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:13:56.405086 containerd[1577]: time="2025-07-10T00:13:56.405055939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:13:56.979219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045783130.mount: Deactivated successfully. 
Jul 10 00:13:59.100767 containerd[1577]: time="2025-07-10T00:13:59.100689928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:59.101457 containerd[1577]: time="2025-07-10T00:13:59.101396293Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 10 00:13:59.102576 containerd[1577]: time="2025-07-10T00:13:59.102525203Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:59.105404 containerd[1577]: time="2025-07-10T00:13:59.105373943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:13:59.106347 containerd[1577]: time="2025-07-10T00:13:59.106303998Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.701215425s" Jul 10 00:13:59.106347 containerd[1577]: time="2025-07-10T00:13:59.106333187Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 10 00:14:01.000361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:14:01.002583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:14:01.242503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:01.257425 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:14:01.301929 kubelet[2252]: E0710 00:14:01.301816 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:14:01.306473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:14:01.306725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:14:01.307223 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.2M memory peak. Jul 10 00:14:01.512509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:01.512684 systemd[1]: kubelet.service: Consumed 245ms CPU time, 110.2M memory peak. Jul 10 00:14:01.515499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:14:01.544748 systemd[1]: Reload requested from client PID 2267 ('systemctl') (unit session-9.scope)... Jul 10 00:14:01.544772 systemd[1]: Reloading... Jul 10 00:14:01.655850 zram_generator::config[2316]: No configuration found. Jul 10 00:14:01.884306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:14:02.015545 systemd[1]: Reloading finished in 470 ms. 
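The first kubelet start above (PID 2252) exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is a KubeletConfiguration written during init/join, and the later successful start suggests it appeared in the meantime. Purely as an illustration of the file the kubelet is looking for, a minimal sketch follows; the field values are assumptions, not the configuration this host actually received.

package main

import "os"

// Illustrative minimal KubeletConfiguration; a real cluster gets this file from
// kubeadm (or another provisioner) with many more fields set.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # assumption: systemd cgroup driver, as the later kubelet logs report
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		panic(err)
	}
}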
Jul 10 00:14:02.078003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:14:02.078165 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:14:02.078552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:02.078634 systemd[1]: kubelet.service: Consumed 171ms CPU time, 98.4M memory peak. Jul 10 00:14:02.080671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:14:02.303655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:02.309419 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:14:02.357345 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:14:02.357345 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:14:02.357345 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:14:02.357823 kubelet[2358]: I0710 00:14:02.357390 2358 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:14:02.687133 kubelet[2358]: I0710 00:14:02.686989 2358 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:14:02.687133 kubelet[2358]: I0710 00:14:02.687032 2358 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:14:02.687373 kubelet[2358]: I0710 00:14:02.687345 2358 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:14:02.717210 kubelet[2358]: E0710 00:14:02.715803 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:02.719861 kubelet[2358]: I0710 00:14:02.719773 2358 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:14:02.726957 kubelet[2358]: I0710 00:14:02.726914 2358 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:14:02.733553 kubelet[2358]: I0710 00:14:02.733490 2358 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:14:02.733702 kubelet[2358]: I0710 00:14:02.733639 2358 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:14:02.733806 kubelet[2358]: I0710 00:14:02.733770 2358 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:14:02.734040 kubelet[2358]: I0710 00:14:02.733796 2358 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:14:02.734040 kubelet[2358]: I0710 00:14:02.734041 2358 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:14:02.734040 kubelet[2358]: I0710 00:14:02.734051 2358 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:14:02.734386 kubelet[2358]: I0710 00:14:02.734218 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:14:02.736444 kubelet[2358]: I0710 00:14:02.736400 2358 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:14:02.736444 kubelet[2358]: I0710 00:14:02.736428 2358 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:14:02.736532 kubelet[2358]: I0710 00:14:02.736483 2358 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:14:02.736567 kubelet[2358]: I0710 00:14:02.736544 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:14:02.738173 kubelet[2358]: W0710 00:14:02.738119 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:02.738237 kubelet[2358]: E0710 00:14:02.738183 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:02.739414 kubelet[2358]: I0710 00:14:02.739311 2358 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:14:02.739751 kubelet[2358]: W0710 00:14:02.739677 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:02.739805 kubelet[2358]: E0710 00:14:02.739758 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:02.739836 kubelet[2358]: I0710 00:14:02.739814 2358 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:14:02.741159 kubelet[2358]: W0710 00:14:02.740916 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:14:02.743255 kubelet[2358]: I0710 00:14:02.743207 2358 server.go:1274] "Started kubelet" Jul 10 00:14:02.743444 kubelet[2358]: I0710 00:14:02.743310 2358 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:14:02.744850 kubelet[2358]: I0710 00:14:02.744803 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:14:02.745858 kubelet[2358]: I0710 00:14:02.745329 2358 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:14:02.746637 kubelet[2358]: I0710 00:14:02.746391 2358 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:14:02.747052 kubelet[2358]: I0710 00:14:02.747029 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:14:02.749158 kubelet[2358]: I0710 00:14:02.749130 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:14:02.751088 kubelet[2358]: I0710 00:14:02.751067 2358 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:14:02.751306 kubelet[2358]: E0710 00:14:02.751276 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:02.751554 kubelet[2358]: I0710 00:14:02.751535 2358 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:14:02.751628 kubelet[2358]: I0710 00:14:02.751601 2358 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:14:02.753689 kubelet[2358]: E0710 00:14:02.753633 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Jul 10 00:14:02.753903 kubelet[2358]: E0710 00:14:02.752100 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bb86d06160b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:14:02.743177393 +0000 UTC m=+0.428158726,LastTimestamp:2025-07-10 00:14:02.743177393 +0000 UTC m=+0.428158726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:14:02.754158 kubelet[2358]: W0710 00:14:02.754115 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:02.754277 kubelet[2358]: E0710 00:14:02.754240 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:02.754596 kubelet[2358]: E0710 00:14:02.754534 2358 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:14:02.754596 kubelet[2358]: I0710 00:14:02.754541 2358 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:14:02.754884 kubelet[2358]: I0710 00:14:02.754835 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:14:02.755991 kubelet[2358]: I0710 00:14:02.755952 2358 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:14:02.766798 kubelet[2358]: I0710 00:14:02.766759 2358 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:14:02.766798 kubelet[2358]: I0710 00:14:02.766775 2358 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:14:02.766798 kubelet[2358]: I0710 00:14:02.766795 2358 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:14:02.774550 kubelet[2358]: I0710 00:14:02.774488 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:14:02.775925 kubelet[2358]: I0710 00:14:02.775873 2358 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:14:02.775925 kubelet[2358]: I0710 00:14:02.775925 2358 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:14:02.776037 kubelet[2358]: I0710 00:14:02.775985 2358 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:14:02.776069 kubelet[2358]: E0710 00:14:02.776048 2358 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:14:02.776784 kubelet[2358]: W0710 00:14:02.776702 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:02.776784 kubelet[2358]: E0710 00:14:02.776746 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:02.852278 kubelet[2358]: E0710 00:14:02.852208 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:02.876700 kubelet[2358]: E0710 00:14:02.876643 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:14:02.953223 kubelet[2358]: E0710 00:14:02.953063 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:02.954587 kubelet[2358]: E0710 00:14:02.954551 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jul 10 00:14:03.053991 kubelet[2358]: E0710 00:14:03.053915 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:03.077170 kubelet[2358]: E0710 00:14:03.077116 2358 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:14:03.154699 kubelet[2358]: E0710 00:14:03.154618 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:03.255753 kubelet[2358]: E0710 00:14:03.255698 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:03.322919 kubelet[2358]: I0710 00:14:03.322873 2358 policy_none.go:49] "None policy: Start" Jul 10 00:14:03.323986 kubelet[2358]: I0710 00:14:03.323697 2358 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:14:03.323986 kubelet[2358]: I0710 00:14:03.323722 2358 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:14:03.330929 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:14:03.342630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:14:03.346357 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:14:03.355677 kubelet[2358]: E0710 00:14:03.355640 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jul 10 00:14:03.355985 kubelet[2358]: E0710 00:14:03.355926 2358 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:14:03.356055 kubelet[2358]: I0710 00:14:03.355988 2358 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:14:03.356250 kubelet[2358]: I0710 00:14:03.356222 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:14:03.356314 kubelet[2358]: I0710 00:14:03.356249 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:14:03.356522 kubelet[2358]: I0710 00:14:03.356507 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:14:03.357596 kubelet[2358]: E0710 00:14:03.357570 2358 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:14:03.457652 kubelet[2358]: I0710 00:14:03.457604 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:03.458019 kubelet[2358]: E0710 00:14:03.457960 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 10 00:14:03.487044 systemd[1]: Created slice kubepods-burstable-poded26c67d66a9c22950ba7fa34c218cd5.slice - libcontainer container kubepods-burstable-poded26c67d66a9c22950ba7fa34c218cd5.slice. Jul 10 00:14:03.518752 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 10 00:14:03.524372 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
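Every request the kubelet makes to https://10.0.0.15:6443 is still refused at this point because the kube-apiserver it talks to is itself one of the static pods being prepared here; the lease, node-registration and reflector errors clear once that sandbox is up. In the same spirit as those retries, a tiny poll loop against the endpoint from the log (TLS verification is skipped because the sketch only checks whether anything is listening yet):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; certificate checks skipped since this sketch
	// only cares whether the apiserver has started answering at all.
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://10.0.0.15:6443/healthz")
		if err != nil {
			fmt.Println("apiserver not up yet:", err) // e.g. connect: connection refused
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered with HTTP", resp.StatusCode)
		return
	}
}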
Jul 10 00:14:03.557033 kubelet[2358]: I0710 00:14:03.556939 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:03.557033 kubelet[2358]: I0710 00:14:03.557012 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:03.557033 kubelet[2358]: I0710 00:14:03.557046 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:03.557033 kubelet[2358]: I0710 00:14:03.557062 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:03.557329 kubelet[2358]: I0710 00:14:03.557080 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:03.557329 kubelet[2358]: I0710 00:14:03.557095 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:03.557329 kubelet[2358]: I0710 00:14:03.557110 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:03.557329 kubelet[2358]: I0710 00:14:03.557125 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:03.557329 kubelet[2358]: I0710 00:14:03.557140 2358 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " 
pod="kube-system/kube-scheduler-localhost" Jul 10 00:14:03.659342 kubelet[2358]: I0710 00:14:03.659274 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:03.659701 kubelet[2358]: E0710 00:14:03.659657 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 10 00:14:03.817021 containerd[1577]: time="2025-07-10T00:14:03.816828094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed26c67d66a9c22950ba7fa34c218cd5,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:03.822533 containerd[1577]: time="2025-07-10T00:14:03.822492595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:03.827131 containerd[1577]: time="2025-07-10T00:14:03.827103830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:03.830770 kubelet[2358]: W0710 00:14:03.830713 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:03.830830 kubelet[2358]: E0710 00:14:03.830774 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:04.012665 kubelet[2358]: W0710 00:14:04.012545 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:04.012665 kubelet[2358]: E0710 00:14:04.012636 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:04.114847 kubelet[2358]: I0710 00:14:04.114670 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:04.115140 kubelet[2358]: E0710 00:14:04.115100 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 10 00:14:04.115858 kubelet[2358]: W0710 00:14:04.115796 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:04.116009 kubelet[2358]: E0710 00:14:04.115863 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:04.151646 kubelet[2358]: W0710 00:14:04.151570 2358 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Jul 10 00:14:04.151646 kubelet[2358]: E0710 00:14:04.151643 2358 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:04.156233 kubelet[2358]: E0710 00:14:04.156186 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Jul 10 00:14:04.629633 containerd[1577]: time="2025-07-10T00:14:04.629543485Z" level=info msg="connecting to shim ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149" address="unix:///run/containerd/s/b68999738a2d7c4fd07b9a7dff9478b24f53d77e5d912c36765b19b1ac004506" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:04.635132 containerd[1577]: time="2025-07-10T00:14:04.635069894Z" level=info msg="connecting to shim 418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0" address="unix:///run/containerd/s/ea1c9e0889831791bb08dfe5678f65ce9d796b1814a39846e8ca2afd5faba7ca" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:04.652715 containerd[1577]: time="2025-07-10T00:14:04.652665796Z" level=info msg="connecting to shim 966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89" address="unix:///run/containerd/s/cc3d988d129685fa4117b87550b476907fe6a0d134943dd5af9a61e931d7e5c4" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:04.686128 systemd[1]: Started cri-containerd-ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149.scope - libcontainer container ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149. Jul 10 00:14:04.690898 systemd[1]: Started cri-containerd-418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0.scope - libcontainer container 418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0. Jul 10 00:14:04.703710 systemd[1]: Started cri-containerd-966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89.scope - libcontainer container 966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89. 
Jul 10 00:14:04.765430 containerd[1577]: time="2025-07-10T00:14:04.765382758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149\"" Jul 10 00:14:04.769394 containerd[1577]: time="2025-07-10T00:14:04.769355752Z" level=info msg="CreateContainer within sandbox \"ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:14:04.780634 containerd[1577]: time="2025-07-10T00:14:04.780359787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89\"" Jul 10 00:14:04.786078 containerd[1577]: time="2025-07-10T00:14:04.786004092Z" level=info msg="CreateContainer within sandbox \"966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:14:04.789188 containerd[1577]: time="2025-07-10T00:14:04.789156675Z" level=info msg="Container 94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:04.794602 containerd[1577]: time="2025-07-10T00:14:04.794550805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed26c67d66a9c22950ba7fa34c218cd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0\"" Jul 10 00:14:04.796920 containerd[1577]: time="2025-07-10T00:14:04.796888058Z" level=info msg="CreateContainer within sandbox \"418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:14:04.799717 containerd[1577]: time="2025-07-10T00:14:04.799668974Z" level=info msg="Container 627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:04.801512 containerd[1577]: time="2025-07-10T00:14:04.801477675Z" level=info msg="CreateContainer within sandbox \"ddf04f3c0d1815a0a321dcedd4bebf5a0c6cc0c85afb6173e2b73074723a3149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546\"" Jul 10 00:14:04.802114 containerd[1577]: time="2025-07-10T00:14:04.802091895Z" level=info msg="StartContainer for \"94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546\"" Jul 10 00:14:04.803354 containerd[1577]: time="2025-07-10T00:14:04.803325576Z" level=info msg="connecting to shim 94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546" address="unix:///run/containerd/s/b68999738a2d7c4fd07b9a7dff9478b24f53d77e5d912c36765b19b1ac004506" protocol=ttrpc version=3 Jul 10 00:14:04.809391 containerd[1577]: time="2025-07-10T00:14:04.809346928Z" level=info msg="CreateContainer within sandbox \"966ad8951d7dcf040c8e33613658641ae44768d56229ee9674429de3728d0c89\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358\"" Jul 10 00:14:04.809940 containerd[1577]: time="2025-07-10T00:14:04.809884895Z" level=info msg="StartContainer for 
\"627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358\"" Jul 10 00:14:04.811289 containerd[1577]: time="2025-07-10T00:14:04.811252288Z" level=info msg="connecting to shim 627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358" address="unix:///run/containerd/s/cc3d988d129685fa4117b87550b476907fe6a0d134943dd5af9a61e931d7e5c4" protocol=ttrpc version=3 Jul 10 00:14:04.815051 containerd[1577]: time="2025-07-10T00:14:04.814381210Z" level=info msg="Container 812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:04.827642 containerd[1577]: time="2025-07-10T00:14:04.827589366Z" level=info msg="CreateContainer within sandbox \"418205f48e479f4dbc8cea175c3af4f9eb91ebb03e4aef421d9f2172419926f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd\"" Jul 10 00:14:04.828508 containerd[1577]: time="2025-07-10T00:14:04.828343009Z" level=info msg="StartContainer for \"812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd\"" Jul 10 00:14:04.829622 containerd[1577]: time="2025-07-10T00:14:04.829589832Z" level=info msg="connecting to shim 812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd" address="unix:///run/containerd/s/ea1c9e0889831791bb08dfe5678f65ce9d796b1814a39846e8ca2afd5faba7ca" protocol=ttrpc version=3 Jul 10 00:14:04.831228 systemd[1]: Started cri-containerd-94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546.scope - libcontainer container 94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546. Jul 10 00:14:04.835529 systemd[1]: Started cri-containerd-627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358.scope - libcontainer container 627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358. Jul 10 00:14:04.860129 systemd[1]: Started cri-containerd-812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd.scope - libcontainer container 812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd. 
Jul 10 00:14:04.901529 kubelet[2358]: E0710 00:14:04.901329 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:14:04.918058 kubelet[2358]: I0710 00:14:04.918011 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:04.919995 kubelet[2358]: E0710 00:14:04.919040 2358 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jul 10 00:14:04.922723 containerd[1577]: time="2025-07-10T00:14:04.922679493Z" level=info msg="StartContainer for \"627b6a18541e636dbbbd6b0788b327b4d0c0023ca83d80c134ff1c09f4ce1358\" returns successfully" Jul 10 00:14:04.922866 containerd[1577]: time="2025-07-10T00:14:04.922841876Z" level=info msg="StartContainer for \"94c9a7753e7a3653ecd372f6f9a4cff5eb2c506f8937754a2e0e8b08808d7546\" returns successfully" Jul 10 00:14:04.945391 containerd[1577]: time="2025-07-10T00:14:04.945334751Z" level=info msg="StartContainer for \"812d2c617d081bd7817e85f6e6d9e8be8b8c78713c061fe11a241bfc258a0fdd\" returns successfully" Jul 10 00:14:06.354705 kubelet[2358]: E0710 00:14:06.354638 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:14:06.520704 kubelet[2358]: I0710 00:14:06.520657 2358 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:06.526621 kubelet[2358]: I0710 00:14:06.526572 2358 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:14:06.739442 kubelet[2358]: I0710 00:14:06.739038 2358 apiserver.go:52] "Watching apiserver" Jul 10 00:14:06.752259 update_engine[1516]: I20250710 00:14:06.752085 1516 update_attempter.cc:509] Updating boot flags... Jul 10 00:14:06.753561 kubelet[2358]: I0710 00:14:06.752572 2358 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:14:06.802119 kubelet[2358]: E0710 00:14:06.801269 2358 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:06.802119 kubelet[2358]: E0710 00:14:06.801711 2358 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:08.252811 systemd[1]: Reload requested from client PID 2646 ('systemctl') (unit session-9.scope)... Jul 10 00:14:08.252827 systemd[1]: Reloading... Jul 10 00:14:08.332103 zram_generator::config[2692]: No configuration found. Jul 10 00:14:08.430782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:14:08.570327 systemd[1]: Reloading finished in 317 ms. Jul 10 00:14:08.603350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 00:14:08.625487 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:14:08.625831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:08.625890 systemd[1]: kubelet.service: Consumed 1.063s CPU time, 131.7M memory peak. Jul 10 00:14:08.628142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:14:08.870692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:14:08.882377 (kubelet)[2734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:14:08.931527 kubelet[2734]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:14:08.931527 kubelet[2734]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:14:08.931527 kubelet[2734]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:14:08.932051 kubelet[2734]: I0710 00:14:08.931659 2734 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:14:08.939301 kubelet[2734]: I0710 00:14:08.939254 2734 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:14:08.939301 kubelet[2734]: I0710 00:14:08.939285 2734 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:14:08.939582 kubelet[2734]: I0710 00:14:08.939562 2734 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:14:08.941374 kubelet[2734]: I0710 00:14:08.941273 2734 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:14:08.943771 kubelet[2734]: I0710 00:14:08.943737 2734 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:14:08.960605 kubelet[2734]: I0710 00:14:08.960558 2734 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:14:08.967111 kubelet[2734]: I0710 00:14:08.966223 2734 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:14:08.967111 kubelet[2734]: I0710 00:14:08.966406 2734 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:14:08.967111 kubelet[2734]: I0710 00:14:08.966557 2734 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:14:08.967111 kubelet[2734]: I0710 00:14:08.966594 2734 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.966778 2734 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.966787 2734 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.966813 2734 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.966951 2734 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.966985 2734 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.967024 2734 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:14:08.967430 kubelet[2734]: I0710 00:14:08.967041 2734 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:14:08.968479 kubelet[2734]: I0710 00:14:08.968453 2734 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:14:08.969082 kubelet[2734]: I0710 00:14:08.969061 2734 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:14:08.969999 kubelet[2734]: I0710 00:14:08.969576 2734 server.go:1274] "Started kubelet" Jul 10 00:14:08.970166 kubelet[2734]: I0710 00:14:08.970114 2734 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:14:08.973085 kubelet[2734]: I0710 
00:14:08.973059 2734 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:14:08.973483 kubelet[2734]: I0710 00:14:08.970124 2734 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:14:08.974294 kubelet[2734]: I0710 00:14:08.974270 2734 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:14:08.974679 kubelet[2734]: I0710 00:14:08.974661 2734 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:14:08.980941 kubelet[2734]: I0710 00:14:08.977538 2734 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:14:08.983473 kubelet[2734]: I0710 00:14:08.982676 2734 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:14:08.984291 kubelet[2734]: I0710 00:14:08.982696 2734 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:14:08.984291 kubelet[2734]: I0710 00:14:08.984142 2734 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:14:08.985542 kubelet[2734]: I0710 00:14:08.985518 2734 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:14:08.987126 kubelet[2734]: I0710 00:14:08.987093 2734 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:14:08.988535 kubelet[2734]: E0710 00:14:08.988495 2734 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:14:08.990022 kubelet[2734]: I0710 00:14:08.989739 2734 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:14:09.000467 kubelet[2734]: I0710 00:14:09.000410 2734 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:14:09.003512 kubelet[2734]: I0710 00:14:09.002352 2734 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:14:09.003512 kubelet[2734]: I0710 00:14:09.002380 2734 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:14:09.003512 kubelet[2734]: I0710 00:14:09.002407 2734 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:14:09.003512 kubelet[2734]: E0710 00:14:09.002455 2734 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:14:09.035995 kubelet[2734]: I0710 00:14:09.035946 2734 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:14:09.036171 kubelet[2734]: I0710 00:14:09.036143 2734 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:14:09.036171 kubelet[2734]: I0710 00:14:09.036171 2734 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:14:09.036356 kubelet[2734]: I0710 00:14:09.036339 2734 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:14:09.036406 kubelet[2734]: I0710 00:14:09.036352 2734 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:14:09.036406 kubelet[2734]: I0710 00:14:09.036370 2734 policy_none.go:49] "None policy: Start" Jul 10 00:14:09.036877 kubelet[2734]: I0710 00:14:09.036857 2734 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:14:09.036961 kubelet[2734]: I0710 00:14:09.036882 2734 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:14:09.037070 kubelet[2734]: I0710 00:14:09.037055 2734 state_mem.go:75] "Updated machine memory state" Jul 10 00:14:09.041798 kubelet[2734]: I0710 00:14:09.041771 2734 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:14:09.042046 kubelet[2734]: I0710 00:14:09.041995 2734 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:14:09.042046 kubelet[2734]: I0710 00:14:09.042017 2734 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:14:09.042253 kubelet[2734]: I0710 00:14:09.042230 2734 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:14:09.154207 kubelet[2734]: I0710 00:14:09.152958 2734 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:14:09.182014 kubelet[2734]: I0710 00:14:09.181601 2734 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:14:09.182014 kubelet[2734]: I0710 00:14:09.181735 2734 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:14:09.286271 kubelet[2734]: I0710 00:14:09.286188 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:09.286271 kubelet[2734]: I0710 00:14:09.286248 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:09.286271 kubelet[2734]: I0710 00:14:09.286277 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:09.286562 kubelet[2734]: I0710 00:14:09.286325 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:09.286562 kubelet[2734]: I0710 00:14:09.286362 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:09.286562 kubelet[2734]: I0710 00:14:09.286386 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:09.286562 kubelet[2734]: I0710 00:14:09.286407 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed26c67d66a9c22950ba7fa34c218cd5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed26c67d66a9c22950ba7fa34c218cd5\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:09.286562 kubelet[2734]: I0710 00:14:09.286427 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:14:09.286752 kubelet[2734]: I0710 00:14:09.286449 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:14:09.967394 kubelet[2734]: I0710 00:14:09.967335 2734 apiserver.go:52] "Watching apiserver" Jul 10 00:14:09.984602 kubelet[2734]: I0710 00:14:09.984568 2734 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:14:10.185159 kubelet[2734]: E0710 00:14:10.184295 2734 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:14:10.185159 kubelet[2734]: E0710 00:14:10.184580 2734 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:14:10.185789 kubelet[2734]: I0710 00:14:10.185535 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.185514405 
podStartE2EDuration="1.185514405s" podCreationTimestamp="2025-07-10 00:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:14:10.184713796 +0000 UTC m=+1.294333339" watchObservedRunningTime="2025-07-10 00:14:10.185514405 +0000 UTC m=+1.295133958" Jul 10 00:14:10.223904 kubelet[2734]: I0710 00:14:10.223584 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.223550833 podStartE2EDuration="1.223550833s" podCreationTimestamp="2025-07-10 00:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:14:10.223140861 +0000 UTC m=+1.332760414" watchObservedRunningTime="2025-07-10 00:14:10.223550833 +0000 UTC m=+1.333170386" Jul 10 00:14:10.509751 kubelet[2734]: I0710 00:14:10.509357 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.509332762 podStartE2EDuration="1.509332762s" podCreationTimestamp="2025-07-10 00:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:14:10.356324172 +0000 UTC m=+1.465943725" watchObservedRunningTime="2025-07-10 00:14:10.509332762 +0000 UTC m=+1.618952315" Jul 10 00:14:13.738714 kubelet[2734]: I0710 00:14:13.738638 2734 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:14:13.739517 containerd[1577]: time="2025-07-10T00:14:13.739485747Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:14:13.740854 kubelet[2734]: I0710 00:14:13.740811 2734 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:14:14.248518 systemd[1]: Created slice kubepods-besteffort-podd0d2f3ef_5c28_4395_b27a_723381ccdef0.slice - libcontainer container kubepods-besteffort-podd0d2f3ef_5c28_4395_b27a_723381ccdef0.slice. 
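The 192.168.0.0/24 PodCIDR that the kubelet pushes to the runtime above lives in the Node object's spec, and once the apiserver answers it can be read back with client-go. A small sketch, assuming kubeadm's admin kubeconfig at /etc/kubernetes/admin.conf (any kubeconfig with node read access would do):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: admin kubeconfig path as laid down by kubeadm.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The node registered itself as "localhost" earlier in the log.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("PodCIDR:", node.Spec.PodCIDR) // 192.168.0.0/24 per the log above
}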
Jul 10 00:14:14.317868 kubelet[2734]: I0710 00:14:14.317675 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0d2f3ef-5c28-4395-b27a-723381ccdef0-xtables-lock\") pod \"kube-proxy-zxx5m\" (UID: \"d0d2f3ef-5c28-4395-b27a-723381ccdef0\") " pod="kube-system/kube-proxy-zxx5m" Jul 10 00:14:14.317868 kubelet[2734]: I0710 00:14:14.317737 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkf5d\" (UniqueName: \"kubernetes.io/projected/d0d2f3ef-5c28-4395-b27a-723381ccdef0-kube-api-access-lkf5d\") pod \"kube-proxy-zxx5m\" (UID: \"d0d2f3ef-5c28-4395-b27a-723381ccdef0\") " pod="kube-system/kube-proxy-zxx5m" Jul 10 00:14:14.317868 kubelet[2734]: I0710 00:14:14.317774 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0d2f3ef-5c28-4395-b27a-723381ccdef0-kube-proxy\") pod \"kube-proxy-zxx5m\" (UID: \"d0d2f3ef-5c28-4395-b27a-723381ccdef0\") " pod="kube-system/kube-proxy-zxx5m" Jul 10 00:14:14.317868 kubelet[2734]: I0710 00:14:14.317796 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0d2f3ef-5c28-4395-b27a-723381ccdef0-lib-modules\") pod \"kube-proxy-zxx5m\" (UID: \"d0d2f3ef-5c28-4395-b27a-723381ccdef0\") " pod="kube-system/kube-proxy-zxx5m" Jul 10 00:14:14.558784 containerd[1577]: time="2025-07-10T00:14:14.558720683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxx5m,Uid:d0d2f3ef-5c28-4395-b27a-723381ccdef0,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:14.578507 systemd[1]: Created slice kubepods-besteffort-pod9929e3ba_e74e_49cc_86f9_b6eb1f944267.slice - libcontainer container kubepods-besteffort-pod9929e3ba_e74e_49cc_86f9_b6eb1f944267.slice. Jul 10 00:14:14.603918 containerd[1577]: time="2025-07-10T00:14:14.603856898Z" level=info msg="connecting to shim 623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa" address="unix:///run/containerd/s/725cb2811319f0002ad1faeb680ce29c5b148471d9a5825a306d033994ab3f8a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:14.620279 kubelet[2734]: I0710 00:14:14.620238 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9929e3ba-e74e-49cc-86f9-b6eb1f944267-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-9557l\" (UID: \"9929e3ba-e74e-49cc-86f9-b6eb1f944267\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9557l" Jul 10 00:14:14.620279 kubelet[2734]: I0710 00:14:14.620285 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dwmp\" (UniqueName: \"kubernetes.io/projected/9929e3ba-e74e-49cc-86f9-b6eb1f944267-kube-api-access-6dwmp\") pod \"tigera-operator-5bf8dfcb4-9557l\" (UID: \"9929e3ba-e74e-49cc-86f9-b6eb1f944267\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9557l" Jul 10 00:14:14.639229 systemd[1]: Started cri-containerd-623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa.scope - libcontainer container 623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa. 
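Note: the reconciler_common entries above list, one volume per entry, the mounts the kubelet verifies before it will start kube-proxy-zxx5m (xtables-lock, kube-api-access-lkf5d, the kube-proxy ConfigMap, lib-modules) and tigera-operator-5bf8dfcb4-9557l (var-lib-calico, kube-api-access-6dwmp). A throwaway Python sketch for grouping such journal lines by pod; the regex is written for the exact quoting shown in these lines and is not part of any Kubernetes tooling:

    import re
    from collections import defaultdict

    # Captures the volume name and the structured pod="..." field from
    # "VerifyControllerAttachedVolume started for volume ..." journal lines.
    PATTERN = re.compile(r'started for volume \\?"(?P<volume>[^"\\]+)\\?".*?pod="(?P<pod>[^"]+)"')

    def volumes_by_pod(journal_text: str) -> dict:
        grouped = defaultdict(list)
        for m in PATTERN.finditer(journal_text):
            grouped[m.group("pod")].append(m.group("volume"))
        return dict(grouped)

    sample = r'"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/...\") pod \"kube-proxy-zxx5m\" " pod="kube-system/kube-proxy-zxx5m"'
    print(volumes_by_pod(sample))  # {'kube-system/kube-proxy-zxx5m': ['xtables-lock']}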
Jul 10 00:14:14.679678 containerd[1577]: time="2025-07-10T00:14:14.679617536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxx5m,Uid:d0d2f3ef-5c28-4395-b27a-723381ccdef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa\"" Jul 10 00:14:14.683345 containerd[1577]: time="2025-07-10T00:14:14.683274668Z" level=info msg="CreateContainer within sandbox \"623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:14:14.698020 containerd[1577]: time="2025-07-10T00:14:14.696594378Z" level=info msg="Container 38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:14.701243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1452852046.mount: Deactivated successfully. Jul 10 00:14:14.708998 containerd[1577]: time="2025-07-10T00:14:14.708447298Z" level=info msg="CreateContainer within sandbox \"623e665a1d8b7b3e1b29742b026df09dc8dae33c906bf116b3bb259c63e37ffa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a\"" Jul 10 00:14:14.709669 containerd[1577]: time="2025-07-10T00:14:14.709612522Z" level=info msg="StartContainer for \"38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a\"" Jul 10 00:14:14.712322 containerd[1577]: time="2025-07-10T00:14:14.712261775Z" level=info msg="connecting to shim 38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a" address="unix:///run/containerd/s/725cb2811319f0002ad1faeb680ce29c5b148471d9a5825a306d033994ab3f8a" protocol=ttrpc version=3 Jul 10 00:14:14.736164 systemd[1]: Started cri-containerd-38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a.scope - libcontainer container 38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a. Jul 10 00:14:14.787518 containerd[1577]: time="2025-07-10T00:14:14.787461751Z" level=info msg="StartContainer for \"38893e8153106375b5c8411447af992b0f6c2bdddce806d6f51928a549bfaf5a\" returns successfully" Jul 10 00:14:14.883471 containerd[1577]: time="2025-07-10T00:14:14.883349610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9557l,Uid:9929e3ba-e74e-49cc-86f9-b6eb1f944267,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:14:14.920710 containerd[1577]: time="2025-07-10T00:14:14.920630932Z" level=info msg="connecting to shim a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1" address="unix:///run/containerd/s/c2d5acb2f846e70b18798b35e1e10acc177e93152ba7b344e7778b429c96d72d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:14.956328 systemd[1]: Started cri-containerd-a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1.scope - libcontainer container a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1. 
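Note: in the containerd lines above, the kube-proxy sandbox (623e66...) and the kube-proxy container created inside it (38893e...) are both dialed over the same shim socket under /run/containerd/s/, while the tigera-operator sandbox (a66a88...) gets its own socket, consistent with containerd running one shim per pod sandbox and reusing it for that pod's containers. A small helper (my own sketch, regex matched to the logrus format shown here) that makes the grouping explicit:

    import re
    from collections import defaultdict

    SHIM_RE = re.compile(r'connecting to shim (\w+)" address="([^"]+)"')

    def ids_per_shim_socket(journal_text: str) -> dict:
        """Group sandbox/container ids by the shim socket containerd dials for them."""
        grouped = defaultdict(list)
        for cid, addr in SHIM_RE.findall(journal_text):
            grouped[addr].append(cid)
        return dict(grouped)

    # Fed the three "connecting to shim" lines above, this puts 623e66... and
    # 38893e... on one socket and a66a88... on another.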
Jul 10 00:14:15.007577 containerd[1577]: time="2025-07-10T00:14:15.007519380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9557l,Uid:9929e3ba-e74e-49cc-86f9-b6eb1f944267,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1\"" Jul 10 00:14:15.009758 containerd[1577]: time="2025-07-10T00:14:15.009723580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:14:15.048529 kubelet[2734]: I0710 00:14:15.048452 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxx5m" podStartSLOduration=1.048424153 podStartE2EDuration="1.048424153s" podCreationTimestamp="2025-07-10 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:14:15.038453216 +0000 UTC m=+6.148072769" watchObservedRunningTime="2025-07-10 00:14:15.048424153 +0000 UTC m=+6.158043706" Jul 10 00:14:17.163461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299397632.mount: Deactivated successfully. Jul 10 00:14:19.134200 containerd[1577]: time="2025-07-10T00:14:19.134106527Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:19.134897 containerd[1577]: time="2025-07-10T00:14:19.134866093Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 00:14:19.136200 containerd[1577]: time="2025-07-10T00:14:19.136162479Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:19.143111 containerd[1577]: time="2025-07-10T00:14:19.143047331Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:19.143851 containerd[1577]: time="2025-07-10T00:14:19.143792942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 4.134031163s" Jul 10 00:14:19.143851 containerd[1577]: time="2025-07-10T00:14:19.143835169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 00:14:19.146066 containerd[1577]: time="2025-07-10T00:14:19.146015568Z" level=info msg="CreateContainer within sandbox \"a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:14:19.155895 containerd[1577]: time="2025-07-10T00:14:19.155826920Z" level=info msg="Container d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:19.160191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154864376.mount: Deactivated successfully. 
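Note: the pull of quay.io/tigera/operator:v1.38.3 starts at 00:14:15.009 and completes at 00:14:19.143, matching the reported 4.134031163s, with 25056543 bytes read. A quick back-of-the-envelope rate check in Python, numbers copied from the lines above:

    BYTES_READ   = 25_056_543     # "bytes read" from the stop-pulling line
    PULL_SECONDS = 4.134031163    # duration from the "Pulled image" line

    rate = BYTES_READ / PULL_SECONDS
    print(f"{rate / 1_048_576:.2f} MiB/s")  # ~5.78 MiB/s effective pull rate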
Jul 10 00:14:19.163609 containerd[1577]: time="2025-07-10T00:14:19.163539633Z" level=info msg="CreateContainer within sandbox \"a66a8815d924f441823a7bbf02f5036631371d67d34516911c9fccdc4dbf6cb1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82\"" Jul 10 00:14:19.164294 containerd[1577]: time="2025-07-10T00:14:19.164243137Z" level=info msg="StartContainer for \"d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82\"" Jul 10 00:14:19.165410 containerd[1577]: time="2025-07-10T00:14:19.165370054Z" level=info msg="connecting to shim d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82" address="unix:///run/containerd/s/c2d5acb2f846e70b18798b35e1e10acc177e93152ba7b344e7778b429c96d72d" protocol=ttrpc version=3 Jul 10 00:14:19.220176 systemd[1]: Started cri-containerd-d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82.scope - libcontainer container d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82. Jul 10 00:14:19.335041 containerd[1577]: time="2025-07-10T00:14:19.334987854Z" level=info msg="StartContainer for \"d71d5ee1249e14250723858d4f2d8487e85f4b004ee11b20864c407f49c6ec82\" returns successfully" Jul 10 00:14:22.634082 kubelet[2734]: I0710 00:14:22.633934 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-9557l" podStartSLOduration=4.498641719 podStartE2EDuration="8.633915323s" podCreationTimestamp="2025-07-10 00:14:14 +0000 UTC" firstStartedPulling="2025-07-10 00:14:15.009327042 +0000 UTC m=+6.118946595" lastFinishedPulling="2025-07-10 00:14:19.144600646 +0000 UTC m=+10.254220199" observedRunningTime="2025-07-10 00:14:20.068629315 +0000 UTC m=+11.178248868" watchObservedRunningTime="2025-07-10 00:14:22.633915323 +0000 UTC m=+13.743534876" Jul 10 00:14:31.194609 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 10 00:14:31.199102 sshd[1778]: Connection closed by 10.0.0.1 port 59336 Jul 10 00:14:31.205233 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jul 10 00:14:31.210804 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:59336.service: Deactivated successfully. Jul 10 00:14:31.214741 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:14:31.215372 systemd[1]: session-9.scope: Consumed 5.718s CPU time, 225.8M memory peak. Jul 10 00:14:31.217592 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:14:31.219553 systemd-logind[1514]: Removed session 9. Jul 10 00:14:35.425954 systemd[1]: Created slice kubepods-besteffort-pod956c8931_d6a0_4fbb_9424_7187046e3822.slice - libcontainer container kubepods-besteffort-pod956c8931_d6a0_4fbb_9424_7187046e3822.slice. 
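Note: unlike the static pods earlier, the tigera-operator pod has a real image pull, and the latency-tracker entry above shows how that is accounted for: podStartE2EDuration (8.633915323s, watchObservedRunningTime minus podCreationTimestamp) minus the pull window (lastFinishedPulling minus firstStartedPulling, 4.135273604s) gives exactly the logged podStartSLOduration of 4.498641719s, consistent with image-pull time being excluded from the SLO figure. Checking the subtraction with values copied from that entry:

    from decimal import Decimal

    e2e  = Decimal("8.633915323")                             # podStartE2EDuration
    pull = Decimal("19.144600646") - Decimal("15.009327042")  # seconds past 00:14 in the pulling timestamps

    print(e2e - pull)  # 4.498641719 == the logged podStartSLOduration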
Jul 10 00:14:35.558835 kubelet[2734]: I0710 00:14:35.558759 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwn9m\" (UniqueName: \"kubernetes.io/projected/956c8931-d6a0-4fbb-9424-7187046e3822-kube-api-access-kwn9m\") pod \"calico-typha-65f6c6b686-kb67m\" (UID: \"956c8931-d6a0-4fbb-9424-7187046e3822\") " pod="calico-system/calico-typha-65f6c6b686-kb67m" Jul 10 00:14:35.558835 kubelet[2734]: I0710 00:14:35.558814 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/956c8931-d6a0-4fbb-9424-7187046e3822-tigera-ca-bundle\") pod \"calico-typha-65f6c6b686-kb67m\" (UID: \"956c8931-d6a0-4fbb-9424-7187046e3822\") " pod="calico-system/calico-typha-65f6c6b686-kb67m" Jul 10 00:14:35.558835 kubelet[2734]: I0710 00:14:35.558841 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/956c8931-d6a0-4fbb-9424-7187046e3822-typha-certs\") pod \"calico-typha-65f6c6b686-kb67m\" (UID: \"956c8931-d6a0-4fbb-9424-7187046e3822\") " pod="calico-system/calico-typha-65f6c6b686-kb67m" Jul 10 00:14:35.731226 containerd[1577]: time="2025-07-10T00:14:35.731073705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65f6c6b686-kb67m,Uid:956c8931-d6a0-4fbb-9424-7187046e3822,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:35.773765 containerd[1577]: time="2025-07-10T00:14:35.773681411Z" level=info msg="connecting to shim c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9" address="unix:///run/containerd/s/10c186d58645c252befd63f6fc009f23c91b0f5475a8f19788e3a2697ac3d562" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:35.817351 systemd[1]: Started cri-containerd-c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9.scope - libcontainer container c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9. Jul 10 00:14:35.836320 systemd[1]: Created slice kubepods-besteffort-pod72510c6f_4df7_43bd_9b2c_ccdb4021a3b8.slice - libcontainer container kubepods-besteffort-pod72510c6f_4df7_43bd_9b2c_ccdb4021a3b8.slice. 
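Note: the long run of kubelet FlexVolume driver-call failures further below (repeated on every plugin re-probe) comes from the prober trying to exec /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before anything has installed it; the call produces no output, so unmarshalling that empty output as JSON fails. On Calico nodes this is typically transient and harmless: the calico-node pod mounts a "flexvol-driver-host" host path (listed below) precisely so the driver can be dropped into place. A minimal sketch of the condition being probed, nothing more (an existence/executability check, not the kubelet's prober code):

    import os

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    # Mirrors why the probe logs "executable file not found": the driver binary
    # is absent (or not executable) at the path the kubelet was configured with.
    if os.path.isfile(DRIVER) and os.access(DRIVER, os.X_OK):
        print("flexvolume driver present; the prober would exec it with 'init'")
    else:
        print("flexvolume driver missing; expect the repeated driver-call warnings")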
Jul 10 00:14:35.889244 containerd[1577]: time="2025-07-10T00:14:35.889193126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65f6c6b686-kb67m,Uid:956c8931-d6a0-4fbb-9424-7187046e3822,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9\"" Jul 10 00:14:35.893410 containerd[1577]: time="2025-07-10T00:14:35.893353473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:14:35.961254 kubelet[2734]: I0710 00:14:35.961176 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-lib-modules\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961254 kubelet[2734]: I0710 00:14:35.961242 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-tigera-ca-bundle\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961254 kubelet[2734]: I0710 00:14:35.961265 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-var-lib-calico\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961479 kubelet[2734]: I0710 00:14:35.961286 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-policysync\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961479 kubelet[2734]: I0710 00:14:35.961307 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-var-run-calico\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961479 kubelet[2734]: I0710 00:14:35.961330 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-xtables-lock\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961479 kubelet[2734]: I0710 00:14:35.961351 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-node-certs\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961479 kubelet[2734]: I0710 00:14:35.961374 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrss6\" (UniqueName: \"kubernetes.io/projected/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-kube-api-access-hrss6\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " 
pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961610 kubelet[2734]: I0710 00:14:35.961406 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-flexvol-driver-host\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961610 kubelet[2734]: I0710 00:14:35.961449 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-cni-bin-dir\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961610 kubelet[2734]: I0710 00:14:35.961520 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-cni-log-dir\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:35.961610 kubelet[2734]: I0710 00:14:35.961543 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/72510c6f-4df7-43bd-9b2c-ccdb4021a3b8-cni-net-dir\") pod \"calico-node-bfpj8\" (UID: \"72510c6f-4df7-43bd-9b2c-ccdb4021a3b8\") " pod="calico-system/calico-node-bfpj8" Jul 10 00:14:36.065140 kubelet[2734]: E0710 00:14:36.065102 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.065140 kubelet[2734]: W0710 00:14:36.065130 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.065332 kubelet[2734]: E0710 00:14:36.065155 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.073223 kubelet[2734]: E0710 00:14:36.072730 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.073223 kubelet[2734]: W0710 00:14:36.072759 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.073223 kubelet[2734]: E0710 00:14:36.072786 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.077308 kubelet[2734]: E0710 00:14:36.077281 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.077308 kubelet[2734]: W0710 00:14:36.077301 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.077463 kubelet[2734]: E0710 00:14:36.077322 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.117204 kubelet[2734]: E0710 00:14:36.117143 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:36.145364 containerd[1577]: time="2025-07-10T00:14:36.145306332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfpj8,Uid:72510c6f-4df7-43bd-9b2c-ccdb4021a3b8,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:36.166847 kubelet[2734]: E0710 00:14:36.166780 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.166847 kubelet[2734]: W0710 00:14:36.166825 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.167076 kubelet[2734]: E0710 00:14:36.166860 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.168313 kubelet[2734]: E0710 00:14:36.168285 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.168313 kubelet[2734]: W0710 00:14:36.168307 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.168431 kubelet[2734]: E0710 00:14:36.168336 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.168610 kubelet[2734]: E0710 00:14:36.168585 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.168610 kubelet[2734]: W0710 00:14:36.168609 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.168701 kubelet[2734]: E0710 00:14:36.168621 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.169193 kubelet[2734]: E0710 00:14:36.169158 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.169193 kubelet[2734]: W0710 00:14:36.169175 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.169193 kubelet[2734]: E0710 00:14:36.169189 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.169988 kubelet[2734]: E0710 00:14:36.169522 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.169988 kubelet[2734]: W0710 00:14:36.169537 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.169988 kubelet[2734]: E0710 00:14:36.169550 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.169988 kubelet[2734]: E0710 00:14:36.169766 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.169988 kubelet[2734]: W0710 00:14:36.169776 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.169988 kubelet[2734]: E0710 00:14:36.169787 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.170330 kubelet[2734]: E0710 00:14:36.170295 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.170330 kubelet[2734]: W0710 00:14:36.170313 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.170330 kubelet[2734]: E0710 00:14:36.170327 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.171228 kubelet[2734]: E0710 00:14:36.171201 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.171228 kubelet[2734]: W0710 00:14:36.171223 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.171341 kubelet[2734]: E0710 00:14:36.171237 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.172785 kubelet[2734]: E0710 00:14:36.172755 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.172785 kubelet[2734]: W0710 00:14:36.172776 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.172922 kubelet[2734]: E0710 00:14:36.172791 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173047 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.173802 kubelet[2734]: W0710 00:14:36.173060 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173071 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173266 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.173802 kubelet[2734]: W0710 00:14:36.173276 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173287 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173503 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.173802 kubelet[2734]: W0710 00:14:36.173513 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.173802 kubelet[2734]: E0710 00:14:36.173524 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.175263 kubelet[2734]: E0710 00:14:36.175166 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.175263 kubelet[2734]: W0710 00:14:36.175187 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.175263 kubelet[2734]: E0710 00:14:36.175202 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.177079 kubelet[2734]: E0710 00:14:36.176348 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.177420 kubelet[2734]: W0710 00:14:36.177267 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.177420 kubelet[2734]: E0710 00:14:36.177305 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.177569 kubelet[2734]: E0710 00:14:36.177558 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.177947 kubelet[2734]: W0710 00:14:36.177929 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.178058 kubelet[2734]: E0710 00:14:36.178044 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.179289 kubelet[2734]: E0710 00:14:36.179198 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.179289 kubelet[2734]: W0710 00:14:36.179211 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.179289 kubelet[2734]: E0710 00:14:36.179222 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.179593 kubelet[2734]: E0710 00:14:36.179578 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.179689 kubelet[2734]: W0710 00:14:36.179674 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.179777 kubelet[2734]: E0710 00:14:36.179761 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.181122 kubelet[2734]: E0710 00:14:36.181109 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.181283 kubelet[2734]: W0710 00:14:36.181184 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.181283 kubelet[2734]: E0710 00:14:36.181199 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.181429 kubelet[2734]: E0710 00:14:36.181417 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.181485 kubelet[2734]: W0710 00:14:36.181473 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.181548 kubelet[2734]: E0710 00:14:36.181537 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.181878 kubelet[2734]: E0710 00:14:36.181793 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.181878 kubelet[2734]: W0710 00:14:36.181804 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.181878 kubelet[2734]: E0710 00:14:36.181813 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.196428 containerd[1577]: time="2025-07-10T00:14:36.196350133Z" level=info msg="connecting to shim f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db" address="unix:///run/containerd/s/c8919e19eb0bf8d85a72d9ea570dde978b52a2d26ddea7d77052a5529baee2c3" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:14:36.237148 systemd[1]: Started cri-containerd-f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db.scope - libcontainer container f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db. Jul 10 00:14:36.264805 kubelet[2734]: E0710 00:14:36.264525 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.264805 kubelet[2734]: W0710 00:14:36.264559 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.264805 kubelet[2734]: E0710 00:14:36.264589 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.264805 kubelet[2734]: I0710 00:14:36.264637 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2039424-1061-4f04-994f-fe93d4b49d0e-kubelet-dir\") pod \"csi-node-driver-9hrm5\" (UID: \"a2039424-1061-4f04-994f-fe93d4b49d0e\") " pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:36.265381 kubelet[2734]: E0710 00:14:36.265208 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.265381 kubelet[2734]: W0710 00:14:36.265225 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.265381 kubelet[2734]: E0710 00:14:36.265256 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.265381 kubelet[2734]: I0710 00:14:36.265278 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2039424-1061-4f04-994f-fe93d4b49d0e-registration-dir\") pod \"csi-node-driver-9hrm5\" (UID: \"a2039424-1061-4f04-994f-fe93d4b49d0e\") " pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:36.265903 kubelet[2734]: E0710 00:14:36.265876 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.266009 kubelet[2734]: W0710 00:14:36.265992 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.267046 kubelet[2734]: E0710 00:14:36.266852 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.267395 kubelet[2734]: E0710 00:14:36.267232 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.267395 kubelet[2734]: W0710 00:14:36.267250 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.267395 kubelet[2734]: E0710 00:14:36.267349 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.268092 kubelet[2734]: E0710 00:14:36.268077 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.268178 kubelet[2734]: W0710 00:14:36.268164 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.268407 kubelet[2734]: E0710 00:14:36.268387 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.268808 kubelet[2734]: E0710 00:14:36.268791 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.268934 kubelet[2734]: W0710 00:14:36.268917 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.269211 kubelet[2734]: E0710 00:14:36.269192 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.269809 kubelet[2734]: I0710 00:14:36.269770 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nnx7\" (UniqueName: \"kubernetes.io/projected/a2039424-1061-4f04-994f-fe93d4b49d0e-kube-api-access-5nnx7\") pod \"csi-node-driver-9hrm5\" (UID: \"a2039424-1061-4f04-994f-fe93d4b49d0e\") " pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:36.270011 kubelet[2734]: E0710 00:14:36.269959 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.270140 kubelet[2734]: W0710 00:14:36.270100 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.270140 kubelet[2734]: E0710 00:14:36.270121 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.271056 kubelet[2734]: E0710 00:14:36.270878 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.271056 kubelet[2734]: W0710 00:14:36.270916 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.271056 kubelet[2734]: E0710 00:14:36.270942 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.271606 kubelet[2734]: I0710 00:14:36.271458 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a2039424-1061-4f04-994f-fe93d4b49d0e-varrun\") pod \"csi-node-driver-9hrm5\" (UID: \"a2039424-1061-4f04-994f-fe93d4b49d0e\") " pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:36.272188 kubelet[2734]: E0710 00:14:36.272054 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.272188 kubelet[2734]: W0710 00:14:36.272070 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.272188 kubelet[2734]: E0710 00:14:36.272086 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.274568 kubelet[2734]: E0710 00:14:36.274490 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.274568 kubelet[2734]: W0710 00:14:36.274508 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.274856 kubelet[2734]: E0710 00:14:36.274821 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.275056 kubelet[2734]: E0710 00:14:36.275039 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.275441 kubelet[2734]: W0710 00:14:36.275122 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.275637 kubelet[2734]: E0710 00:14:36.275590 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.275749 kubelet[2734]: E0710 00:14:36.275736 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.275820 kubelet[2734]: W0710 00:14:36.275805 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.275940 kubelet[2734]: E0710 00:14:36.275923 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.276496 kubelet[2734]: I0710 00:14:36.276395 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2039424-1061-4f04-994f-fe93d4b49d0e-socket-dir\") pod \"csi-node-driver-9hrm5\" (UID: \"a2039424-1061-4f04-994f-fe93d4b49d0e\") " pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:36.276613 kubelet[2734]: E0710 00:14:36.276598 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.276757 kubelet[2734]: W0710 00:14:36.276677 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.276757 kubelet[2734]: E0710 00:14:36.276696 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.277192 kubelet[2734]: E0710 00:14:36.277177 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.277404 kubelet[2734]: W0710 00:14:36.277334 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.277404 kubelet[2734]: E0710 00:14:36.277357 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.277830 kubelet[2734]: E0710 00:14:36.277790 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.277830 kubelet[2734]: W0710 00:14:36.277803 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.277830 kubelet[2734]: E0710 00:14:36.277812 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.289083 containerd[1577]: time="2025-07-10T00:14:36.289026724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfpj8,Uid:72510c6f-4df7-43bd-9b2c-ccdb4021a3b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\"" Jul 10 00:14:36.378353 kubelet[2734]: E0710 00:14:36.378183 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.378353 kubelet[2734]: W0710 00:14:36.378211 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.378353 kubelet[2734]: E0710 00:14:36.378234 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.378546 kubelet[2734]: E0710 00:14:36.378411 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.378546 kubelet[2734]: W0710 00:14:36.378420 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.378546 kubelet[2734]: E0710 00:14:36.378474 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.380231 kubelet[2734]: E0710 00:14:36.380197 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.380231 kubelet[2734]: W0710 00:14:36.380219 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.380317 kubelet[2734]: E0710 00:14:36.380256 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.380489 kubelet[2734]: E0710 00:14:36.380443 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.380489 kubelet[2734]: W0710 00:14:36.380455 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.380489 kubelet[2734]: E0710 00:14:36.380485 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.381192 kubelet[2734]: E0710 00:14:36.380666 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.381192 kubelet[2734]: W0710 00:14:36.381189 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.381292 kubelet[2734]: E0710 00:14:36.381232 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.381415 kubelet[2734]: E0710 00:14:36.381386 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.381415 kubelet[2734]: W0710 00:14:36.381403 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.381597 kubelet[2734]: E0710 00:14:36.381578 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.381597 kubelet[2734]: W0710 00:14:36.381589 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.381737 kubelet[2734]: E0710 00:14:36.381605 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.381737 kubelet[2734]: E0710 00:14:36.381713 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.381926 kubelet[2734]: E0710 00:14:36.381805 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.381926 kubelet[2734]: W0710 00:14:36.381813 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.381926 kubelet[2734]: E0710 00:14:36.381822 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.382089 kubelet[2734]: E0710 00:14:36.382025 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.382089 kubelet[2734]: W0710 00:14:36.382033 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.382089 kubelet[2734]: E0710 00:14:36.382048 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.382240 kubelet[2734]: E0710 00:14:36.382217 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.382240 kubelet[2734]: W0710 00:14:36.382229 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.382349 kubelet[2734]: E0710 00:14:36.382244 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.382460 kubelet[2734]: E0710 00:14:36.382443 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.382460 kubelet[2734]: W0710 00:14:36.382460 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.382932 kubelet[2734]: E0710 00:14:36.382669 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.382932 kubelet[2734]: E0710 00:14:36.382661 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.382932 kubelet[2734]: W0710 00:14:36.382678 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.382932 kubelet[2734]: E0710 00:14:36.382784 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.383230 kubelet[2734]: E0710 00:14:36.383203 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.383230 kubelet[2734]: W0710 00:14:36.383219 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.383230 kubelet[2734]: E0710 00:14:36.383240 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.383545 kubelet[2734]: E0710 00:14:36.383528 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.383545 kubelet[2734]: W0710 00:14:36.383540 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.383681 kubelet[2734]: E0710 00:14:36.383553 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.383770 kubelet[2734]: E0710 00:14:36.383754 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.383770 kubelet[2734]: W0710 00:14:36.383767 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.383824 kubelet[2734]: E0710 00:14:36.383785 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.384019 kubelet[2734]: E0710 00:14:36.383998 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.384019 kubelet[2734]: W0710 00:14:36.384016 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.384090 kubelet[2734]: E0710 00:14:36.384041 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.384297 kubelet[2734]: E0710 00:14:36.384282 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.384297 kubelet[2734]: W0710 00:14:36.384292 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.384357 kubelet[2734]: E0710 00:14:36.384307 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.384802 kubelet[2734]: E0710 00:14:36.384779 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.384802 kubelet[2734]: W0710 00:14:36.384799 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.384907 kubelet[2734]: E0710 00:14:36.384823 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.385144 kubelet[2734]: E0710 00:14:36.385129 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.385144 kubelet[2734]: W0710 00:14:36.385143 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.385234 kubelet[2734]: E0710 00:14:36.385211 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.385362 kubelet[2734]: E0710 00:14:36.385343 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.385362 kubelet[2734]: W0710 00:14:36.385358 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.385431 kubelet[2734]: E0710 00:14:36.385378 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.385708 kubelet[2734]: E0710 00:14:36.385645 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.385708 kubelet[2734]: W0710 00:14:36.385662 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.385854 kubelet[2734]: E0710 00:14:36.385834 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386148 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.388018 kubelet[2734]: W0710 00:14:36.386164 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386192 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386486 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.388018 kubelet[2734]: W0710 00:14:36.386496 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386524 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386699 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.388018 kubelet[2734]: W0710 00:14:36.386707 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386728 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.388018 kubelet[2734]: E0710 00:14:36.386958 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.388305 kubelet[2734]: W0710 00:14:36.386981 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.388305 kubelet[2734]: E0710 00:14:36.386992 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:36.393483 kubelet[2734]: E0710 00:14:36.393466 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:36.393483 kubelet[2734]: W0710 00:14:36.393478 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:36.393575 kubelet[2734]: E0710 00:14:36.393490 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:37.926565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount824520932.mount: Deactivated successfully. 
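The kubelet block above records the FlexVolume dynamic-probe loop: driver-call.go execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and tries to decode a JSON status object from its stdout. Because that uds binary is not on disk yet (Calico's flexvol-driver container, whose image pull and start appear further down in this log, is what places it there), the exec fails, stdout stays empty, and decoding reports "unexpected end of JSON input". The following is a minimal sketch of that handshake, assuming an illustrative DriverStatus shape rather than the kubelet's exact types:

```go
// Sketch only (not kubelet source): approximates the FlexVolume "init" probe
// that driver-call.go performs. An empty stdout — e.g. the driver binary is
// missing, as in the log above — makes json.Unmarshal fail with
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is an illustrative response shape for a FlexVolume driver;
// the field names here are assumptions, not copied from kubelet code.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(path string) (*DriverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// Corresponds to the W-level "FlexVolume: driver call failed" entries.
		return nil, fmt.Errorf("driver call failed: %s, output: %q: %w", path, out, err)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Corresponds to the E-level "Failed to unmarshal output for command: init" entries.
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	fmt.Printf("driver initialised: %+v\n", st)
}
```

This is consistent with the probe errors no longer recurring in the log after the flexvol-driver container runs at 00:14:41.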
Jul 10 00:14:38.022299 kubelet[2734]: E0710 00:14:38.022227 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:39.267345 containerd[1577]: time="2025-07-10T00:14:39.267278629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:39.271672 containerd[1577]: time="2025-07-10T00:14:39.271634527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 00:14:39.273459 containerd[1577]: time="2025-07-10T00:14:39.273384224Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:39.275222 containerd[1577]: time="2025-07-10T00:14:39.275182220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:39.275629 containerd[1577]: time="2025-07-10T00:14:39.275595849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.38220089s" Jul 10 00:14:39.275681 containerd[1577]: time="2025-07-10T00:14:39.275627629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:14:39.276892 containerd[1577]: time="2025-07-10T00:14:39.276665871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:14:39.290461 containerd[1577]: time="2025-07-10T00:14:39.290389443Z" level=info msg="CreateContainer within sandbox \"c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:14:39.299497 containerd[1577]: time="2025-07-10T00:14:39.299449845Z" level=info msg="Container 12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:39.307591 containerd[1577]: time="2025-07-10T00:14:39.307548048Z" level=info msg="CreateContainer within sandbox \"c6b09adecf6adb3fdd27a7b196fc119e5545ef08500dc25dec4b444ce68166e9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22\"" Jul 10 00:14:39.309162 containerd[1577]: time="2025-07-10T00:14:39.308057816Z" level=info msg="StartContainer for \"12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22\"" Jul 10 00:14:39.309162 containerd[1577]: time="2025-07-10T00:14:39.309077294Z" level=info msg="connecting to shim 12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22" address="unix:///run/containerd/s/10c186d58645c252befd63f6fc009f23c91b0f5475a8f19788e3a2697ac3d562" protocol=ttrpc version=3 Jul 10 00:14:39.342280 systemd[1]: Started 
cri-containerd-12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22.scope - libcontainer container 12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22. Jul 10 00:14:39.403541 containerd[1577]: time="2025-07-10T00:14:39.403429970Z" level=info msg="StartContainer for \"12292a786ee1a542cd6bc41c935683d8707a6510a012103964be65accdd5ac22\" returns successfully" Jul 10 00:14:40.003506 kubelet[2734]: E0710 00:14:40.003449 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:40.092789 kubelet[2734]: I0710 00:14:40.092705 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65f6c6b686-kb67m" podStartSLOduration=1.709310832 podStartE2EDuration="5.092683411s" podCreationTimestamp="2025-07-10 00:14:35 +0000 UTC" firstStartedPulling="2025-07-10 00:14:35.893051903 +0000 UTC m=+27.002671466" lastFinishedPulling="2025-07-10 00:14:39.276424492 +0000 UTC m=+30.386044045" observedRunningTime="2025-07-10 00:14:40.091499927 +0000 UTC m=+31.201119480" watchObservedRunningTime="2025-07-10 00:14:40.092683411 +0000 UTC m=+31.202302964" Jul 10 00:14:40.113519 kubelet[2734]: E0710 00:14:40.113476 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.113519 kubelet[2734]: W0710 00:14:40.113500 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.113519 kubelet[2734]: E0710 00:14:40.113522 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.113788 kubelet[2734]: E0710 00:14:40.113773 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.113788 kubelet[2734]: W0710 00:14:40.113786 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.113843 kubelet[2734]: E0710 00:14:40.113797 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.114004 kubelet[2734]: E0710 00:14:40.113990 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.114004 kubelet[2734]: W0710 00:14:40.114003 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.114072 kubelet[2734]: E0710 00:14:40.114014 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.114204 kubelet[2734]: E0710 00:14:40.114189 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.114204 kubelet[2734]: W0710 00:14:40.114201 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.114249 kubelet[2734]: E0710 00:14:40.114211 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.114405 kubelet[2734]: E0710 00:14:40.114391 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.114405 kubelet[2734]: W0710 00:14:40.114403 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.114465 kubelet[2734]: E0710 00:14:40.114413 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.114596 kubelet[2734]: E0710 00:14:40.114580 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.114596 kubelet[2734]: W0710 00:14:40.114592 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.114662 kubelet[2734]: E0710 00:14:40.114602 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.114815 kubelet[2734]: E0710 00:14:40.114801 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.114849 kubelet[2734]: W0710 00:14:40.114815 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.114849 kubelet[2734]: E0710 00:14:40.114825 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.115038 kubelet[2734]: E0710 00:14:40.115023 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.115038 kubelet[2734]: W0710 00:14:40.115036 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.115089 kubelet[2734]: E0710 00:14:40.115046 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.115239 kubelet[2734]: E0710 00:14:40.115225 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.115239 kubelet[2734]: W0710 00:14:40.115237 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.115301 kubelet[2734]: E0710 00:14:40.115247 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.115432 kubelet[2734]: E0710 00:14:40.115413 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.115432 kubelet[2734]: W0710 00:14:40.115425 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.115525 kubelet[2734]: E0710 00:14:40.115436 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.115694 kubelet[2734]: E0710 00:14:40.115676 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.115694 kubelet[2734]: W0710 00:14:40.115690 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.115844 kubelet[2734]: E0710 00:14:40.115702 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.115925 kubelet[2734]: E0710 00:14:40.115911 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.115925 kubelet[2734]: W0710 00:14:40.115922 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.116099 kubelet[2734]: E0710 00:14:40.115933 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.116208 kubelet[2734]: E0710 00:14:40.116190 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.116208 kubelet[2734]: W0710 00:14:40.116202 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.116291 kubelet[2734]: E0710 00:14:40.116214 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.116460 kubelet[2734]: E0710 00:14:40.116431 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.116460 kubelet[2734]: W0710 00:14:40.116445 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.116460 kubelet[2734]: E0710 00:14:40.116458 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.116701 kubelet[2734]: E0710 00:14:40.116683 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.116701 kubelet[2734]: W0710 00:14:40.116698 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.116786 kubelet[2734]: E0710 00:14:40.116711 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.212016 kubelet[2734]: E0710 00:14:40.211983 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.212016 kubelet[2734]: W0710 00:14:40.212007 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.212185 kubelet[2734]: E0710 00:14:40.212030 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.212299 kubelet[2734]: E0710 00:14:40.212271 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.212299 kubelet[2734]: W0710 00:14:40.212292 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.212366 kubelet[2734]: E0710 00:14:40.212308 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.212531 kubelet[2734]: E0710 00:14:40.212513 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.212531 kubelet[2734]: W0710 00:14:40.212529 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.212615 kubelet[2734]: E0710 00:14:40.212547 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.212792 kubelet[2734]: E0710 00:14:40.212762 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.212792 kubelet[2734]: W0710 00:14:40.212775 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.212792 kubelet[2734]: E0710 00:14:40.212790 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.212996 kubelet[2734]: E0710 00:14:40.212980 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.212996 kubelet[2734]: W0710 00:14:40.212990 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.213082 kubelet[2734]: E0710 00:14:40.213003 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.213227 kubelet[2734]: E0710 00:14:40.213208 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.213227 kubelet[2734]: W0710 00:14:40.213223 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.213332 kubelet[2734]: E0710 00:14:40.213236 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.213626 kubelet[2734]: E0710 00:14:40.213606 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.213626 kubelet[2734]: W0710 00:14:40.213620 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.213726 kubelet[2734]: E0710 00:14:40.213657 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.213871 kubelet[2734]: E0710 00:14:40.213850 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.213871 kubelet[2734]: W0710 00:14:40.213866 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.214026 kubelet[2734]: E0710 00:14:40.213907 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.214124 kubelet[2734]: E0710 00:14:40.214106 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.214124 kubelet[2734]: W0710 00:14:40.214120 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.214229 kubelet[2734]: E0710 00:14:40.214136 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.214380 kubelet[2734]: E0710 00:14:40.214362 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.214380 kubelet[2734]: W0710 00:14:40.214377 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.214459 kubelet[2734]: E0710 00:14:40.214396 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.214695 kubelet[2734]: E0710 00:14:40.214670 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.214784 kubelet[2734]: W0710 00:14:40.214692 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.214833 kubelet[2734]: E0710 00:14:40.214780 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.215116 kubelet[2734]: E0710 00:14:40.215085 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.215116 kubelet[2734]: W0710 00:14:40.215101 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.215210 kubelet[2734]: E0710 00:14:40.215135 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.215695 kubelet[2734]: E0710 00:14:40.215417 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.215695 kubelet[2734]: W0710 00:14:40.215431 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.215695 kubelet[2734]: E0710 00:14:40.215454 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.215695 kubelet[2734]: E0710 00:14:40.215650 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.215695 kubelet[2734]: W0710 00:14:40.215658 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.215695 kubelet[2734]: E0710 00:14:40.215671 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.215906 kubelet[2734]: E0710 00:14:40.215838 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.215906 kubelet[2734]: W0710 00:14:40.215846 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.215906 kubelet[2734]: E0710 00:14:40.215858 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.216084 kubelet[2734]: E0710 00:14:40.216068 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.216084 kubelet[2734]: W0710 00:14:40.216079 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.216148 kubelet[2734]: E0710 00:14:40.216093 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.216375 kubelet[2734]: E0710 00:14:40.216357 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.216375 kubelet[2734]: W0710 00:14:40.216372 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.216461 kubelet[2734]: E0710 00:14:40.216388 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:14:40.216567 kubelet[2734]: E0710 00:14:40.216551 2734 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:14:40.216567 kubelet[2734]: W0710 00:14:40.216562 2734 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:14:40.216567 kubelet[2734]: E0710 00:14:40.216572 2734 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:14:40.691306 containerd[1577]: time="2025-07-10T00:14:40.691246231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:40.692103 containerd[1577]: time="2025-07-10T00:14:40.692031463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 10 00:14:40.693227 containerd[1577]: time="2025-07-10T00:14:40.693189550Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:40.695855 containerd[1577]: time="2025-07-10T00:14:40.695804028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:40.696535 containerd[1577]: time="2025-07-10T00:14:40.696493672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.419790222s" Jul 10 00:14:40.696535 containerd[1577]: time="2025-07-10T00:14:40.696528677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:14:40.698689 containerd[1577]: time="2025-07-10T00:14:40.698658703Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:14:40.904603 containerd[1577]: time="2025-07-10T00:14:40.904539730Z" level=info msg="Container 075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:40.918090 containerd[1577]: time="2025-07-10T00:14:40.918044776Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\"" Jul 10 00:14:40.918607 containerd[1577]: time="2025-07-10T00:14:40.918576165Z" level=info msg="StartContainer for \"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\"" Jul 10 00:14:40.920700 containerd[1577]: time="2025-07-10T00:14:40.920661378Z" level=info msg="connecting to shim 075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3" address="unix:///run/containerd/s/c8919e19eb0bf8d85a72d9ea570dde978b52a2d26ddea7d77052a5529baee2c3" protocol=ttrpc version=3 Jul 10 00:14:40.952227 systemd[1]: Started cri-containerd-075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3.scope - libcontainer container 075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3. 
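The "connecting to shim ... protocol=ttrpc version=3" entries above show containerd reaching each container's shim over a per-sandbox unix socket. The fragment below only verifies that such a socket accepts connections, using the address taken from the log line; it is a stand-alone sketch, not the containerd or ttrpc client API:

```go
// Sketch: check that a containerd shim socket (logged as
// "address=unix:///run/containerd/s/<id>") is reachable. This does not speak
// ttrpc; it only dials the unix socket.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func shimReachable(address string) error {
	// containerd logs the address with a "unix://" scheme; net.Dial wants the bare path.
	path := strings.TrimPrefix(address, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("shim socket not reachable: %w", err)
	}
	defer conn.Close()
	return nil
}

func main() {
	// Address copied from the flexvol-driver container's log line above.
	addr := "unix:///run/containerd/s/c8919e19eb0bf8d85a72d9ea570dde978b52a2d26ddea7d77052a5529baee2c3"
	if err := shimReachable(addr); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("shim socket is accepting connections")
}
```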
Jul 10 00:14:41.002427 containerd[1577]: time="2025-07-10T00:14:41.002365345Z" level=info msg="StartContainer for \"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\" returns successfully" Jul 10 00:14:41.014096 systemd[1]: cri-containerd-075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3.scope: Deactivated successfully. Jul 10 00:14:41.018369 containerd[1577]: time="2025-07-10T00:14:41.018315387Z" level=info msg="received exit event container_id:\"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\" id:\"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\" pid:3419 exited_at:{seconds:1752106481 nanos:17789027}" Jul 10 00:14:41.018657 containerd[1577]: time="2025-07-10T00:14:41.018473782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\" id:\"075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3\" pid:3419 exited_at:{seconds:1752106481 nanos:17789027}" Jul 10 00:14:41.047603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-075a9aae20b48924f30232ead16458ee6a334b187bcaefd8f469e6fd33c4c2b3-rootfs.mount: Deactivated successfully. Jul 10 00:14:41.083777 kubelet[2734]: I0710 00:14:41.083725 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:14:42.003587 kubelet[2734]: E0710 00:14:42.003479 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:42.090104 containerd[1577]: time="2025-07-10T00:14:42.090034744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:14:44.003172 kubelet[2734]: E0710 00:14:44.003079 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:44.260829 kubelet[2734]: I0710 00:14:44.260665 2734 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:14:46.002694 kubelet[2734]: E0710 00:14:46.002638 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:46.976787 containerd[1577]: time="2025-07-10T00:14:46.976624214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:46.977939 containerd[1577]: time="2025-07-10T00:14:46.977886760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 00:14:46.979727 containerd[1577]: time="2025-07-10T00:14:46.979685366Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:46.982748 containerd[1577]: time="2025-07-10T00:14:46.982665247Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:46.983647 containerd[1577]: time="2025-07-10T00:14:46.983590604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.89350751s" Jul 10 00:14:46.983647 containerd[1577]: time="2025-07-10T00:14:46.983630038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:14:46.986291 containerd[1577]: time="2025-07-10T00:14:46.986227355Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:14:46.999216 containerd[1577]: time="2025-07-10T00:14:46.999158026Z" level=info msg="Container fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:47.010079 containerd[1577]: time="2025-07-10T00:14:47.010011656Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\"" Jul 10 00:14:47.010865 containerd[1577]: time="2025-07-10T00:14:47.010824533Z" level=info msg="StartContainer for \"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\"" Jul 10 00:14:47.014113 containerd[1577]: time="2025-07-10T00:14:47.012947927Z" level=info msg="connecting to shim fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170" address="unix:///run/containerd/s/c8919e19eb0bf8d85a72d9ea570dde978b52a2d26ddea7d77052a5529baee2c3" protocol=ttrpc version=3 Jul 10 00:14:47.044267 systemd[1]: Started cri-containerd-fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170.scope - libcontainer container fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170. Jul 10 00:14:47.289610 containerd[1577]: time="2025-07-10T00:14:47.289553889Z" level=info msg="StartContainer for \"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\" returns successfully" Jul 10 00:14:48.003695 kubelet[2734]: E0710 00:14:48.003620 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:49.315390 containerd[1577]: time="2025-07-10T00:14:49.315311017Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:14:49.318661 systemd[1]: cri-containerd-fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170.scope: Deactivated successfully. 
Jul 10 00:14:49.319131 systemd[1]: cri-containerd-fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170.scope: Consumed 648ms CPU time, 181.2M memory peak, 2.9M read from disk, 171.2M written to disk. Jul 10 00:14:49.319933 containerd[1577]: time="2025-07-10T00:14:49.319885280Z" level=info msg="received exit event container_id:\"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\" id:\"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\" pid:3480 exited_at:{seconds:1752106489 nanos:319609776}" Jul 10 00:14:49.320304 containerd[1577]: time="2025-07-10T00:14:49.320267023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\" id:\"fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170\" pid:3480 exited_at:{seconds:1752106489 nanos:319609776}" Jul 10 00:14:49.345805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc93e5b7545bd6046dbe1f69ca83e2af6bd509e6cd38a3bf73af137c0932b170-rootfs.mount: Deactivated successfully. Jul 10 00:14:49.350544 kubelet[2734]: I0710 00:14:49.350511 2734 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:14:49.403901 systemd[1]: Created slice kubepods-burstable-pod359de6e9_3b2e_443b_b503_de9f39e018a6.slice - libcontainer container kubepods-burstable-pod359de6e9_3b2e_443b_b503_de9f39e018a6.slice. Jul 10 00:14:49.409450 systemd[1]: Created slice kubepods-besteffort-podaf71a532_c3cb_4f6b_b34f_557db84bc94c.slice - libcontainer container kubepods-besteffort-podaf71a532_c3cb_4f6b_b34f_557db84bc94c.slice. Jul 10 00:14:49.415327 systemd[1]: Created slice kubepods-besteffort-podbcc4652c_78d0_457c_983f_264291541c82.slice - libcontainer container kubepods-besteffort-podbcc4652c_78d0_457c_983f_264291541c82.slice. Jul 10 00:14:49.421926 systemd[1]: Created slice kubepods-besteffort-pod624a8fe1_87ea_42e3_8a5f_66462efcb0d0.slice - libcontainer container kubepods-besteffort-pod624a8fe1_87ea_42e3_8a5f_66462efcb0d0.slice. Jul 10 00:14:49.426876 systemd[1]: Created slice kubepods-besteffort-podc88bc6c7_1873_4977_a99e_f5b3146d186d.slice - libcontainer container kubepods-besteffort-podc88bc6c7_1873_4977_a99e_f5b3146d186d.slice. Jul 10 00:14:49.432636 systemd[1]: Created slice kubepods-burstable-podc9b0fe0a_7632_435e_8a0e_5adbb06433c1.slice - libcontainer container kubepods-burstable-podc9b0fe0a_7632_435e_8a0e_5adbb06433c1.slice. Jul 10 00:14:49.437451 systemd[1]: Created slice kubepods-besteffort-podfa51678a_734c_4f52_b195_0df51e7bd123.slice - libcontainer container kubepods-besteffort-podfa51678a_734c_4f52_b195_0df51e7bd123.slice. 
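The "Created slice kubepods-<qos>-pod<uid>.slice" entries above show the kubelet's systemd cgroup driver creating one slice per pod, named from the QoS class plus the pod UID with dashes turned into underscores. The helper below reproduces the names seen in these lines; it is an illustration only (a hypothetical function, not kubelet code):

```go
// Sketch: rebuild the per-pod systemd slice names observed in the log from a
// QoS class and a pod UID. Assumption: Guaranteed pods sit directly under
// kubepods.slice, so their name omits the QoS segment.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if strings.EqualFold(qosClass, "guaranteed") {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), escaped)
}

func main() {
	// UID taken from the coredns-7c65d6cfc9-mm8jb slice/volume entries in this log.
	fmt.Println(podSliceName("burstable", "359de6e9-3b2e-443b-b503-de9f39e018a6"))
	// Prints: kubepods-burstable-pod359de6e9_3b2e_443b_b503_de9f39e018a6.slice
}
```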
Jul 10 00:14:49.477278 kubelet[2734]: I0710 00:14:49.477195 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnq9\" (UniqueName: \"kubernetes.io/projected/359de6e9-3b2e-443b-b503-de9f39e018a6-kube-api-access-bvnq9\") pod \"coredns-7c65d6cfc9-mm8jb\" (UID: \"359de6e9-3b2e-443b-b503-de9f39e018a6\") " pod="kube-system/coredns-7c65d6cfc9-mm8jb" Jul 10 00:14:49.477503 kubelet[2734]: I0710 00:14:49.477404 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/359de6e9-3b2e-443b-b503-de9f39e018a6-config-volume\") pod \"coredns-7c65d6cfc9-mm8jb\" (UID: \"359de6e9-3b2e-443b-b503-de9f39e018a6\") " pod="kube-system/coredns-7c65d6cfc9-mm8jb" Jul 10 00:14:49.578793 kubelet[2734]: I0710 00:14:49.578634 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxhv\" (UniqueName: \"kubernetes.io/projected/fa51678a-734c-4f52-b195-0df51e7bd123-kube-api-access-4fxhv\") pod \"goldmane-58fd7646b9-cv2mt\" (UID: \"fa51678a-734c-4f52-b195-0df51e7bd123\") " pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.578793 kubelet[2734]: I0710 00:14:49.578692 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b0fe0a-7632-435e-8a0e-5adbb06433c1-config-volume\") pod \"coredns-7c65d6cfc9-gg7tq\" (UID: \"c9b0fe0a-7632-435e-8a0e-5adbb06433c1\") " pod="kube-system/coredns-7c65d6cfc9-gg7tq" Jul 10 00:14:49.578793 kubelet[2734]: I0710 00:14:49.578718 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxszk\" (UniqueName: \"kubernetes.io/projected/af71a532-c3cb-4f6b-b34f-557db84bc94c-kube-api-access-hxszk\") pod \"calico-kube-controllers-78f888ffbb-tv82g\" (UID: \"af71a532-c3cb-4f6b-b34f-557db84bc94c\") " pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" Jul 10 00:14:49.578793 kubelet[2734]: I0710 00:14:49.578736 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bcc4652c-78d0-457c-983f-264291541c82-whisker-backend-key-pair\") pod \"whisker-5f576f4f94-glw6d\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " pod="calico-system/whisker-5f576f4f94-glw6d" Jul 10 00:14:49.578793 kubelet[2734]: I0710 00:14:49.578755 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa51678a-734c-4f52-b195-0df51e7bd123-config\") pod \"goldmane-58fd7646b9-cv2mt\" (UID: \"fa51678a-734c-4f52-b195-0df51e7bd123\") " pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.579078 kubelet[2734]: I0710 00:14:49.578814 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462qn\" (UniqueName: \"kubernetes.io/projected/624a8fe1-87ea-42e3-8a5f-66462efcb0d0-kube-api-access-462qn\") pod \"calico-apiserver-644f966dd-5wff4\" (UID: \"624a8fe1-87ea-42e3-8a5f-66462efcb0d0\") " pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" Jul 10 00:14:49.579078 kubelet[2734]: I0710 00:14:49.578871 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/c88bc6c7-1873-4977-a99e-f5b3146d186d-calico-apiserver-certs\") pod \"calico-apiserver-644f966dd-9rvwf\" (UID: \"c88bc6c7-1873-4977-a99e-f5b3146d186d\") " pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" Jul 10 00:14:49.579078 kubelet[2734]: I0710 00:14:49.578939 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc4652c-78d0-457c-983f-264291541c82-whisker-ca-bundle\") pod \"whisker-5f576f4f94-glw6d\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " pod="calico-system/whisker-5f576f4f94-glw6d" Jul 10 00:14:49.579078 kubelet[2734]: I0710 00:14:49.578981 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af71a532-c3cb-4f6b-b34f-557db84bc94c-tigera-ca-bundle\") pod \"calico-kube-controllers-78f888ffbb-tv82g\" (UID: \"af71a532-c3cb-4f6b-b34f-557db84bc94c\") " pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" Jul 10 00:14:49.579078 kubelet[2734]: I0710 00:14:49.579008 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9dc\" (UniqueName: \"kubernetes.io/projected/c9b0fe0a-7632-435e-8a0e-5adbb06433c1-kube-api-access-lg9dc\") pod \"coredns-7c65d6cfc9-gg7tq\" (UID: \"c9b0fe0a-7632-435e-8a0e-5adbb06433c1\") " pod="kube-system/coredns-7c65d6cfc9-gg7tq" Jul 10 00:14:49.579529 kubelet[2734]: I0710 00:14:49.579484 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7qf\" (UniqueName: \"kubernetes.io/projected/bcc4652c-78d0-457c-983f-264291541c82-kube-api-access-2v7qf\") pod \"whisker-5f576f4f94-glw6d\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " pod="calico-system/whisker-5f576f4f94-glw6d" Jul 10 00:14:49.579591 kubelet[2734]: I0710 00:14:49.579537 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa51678a-734c-4f52-b195-0df51e7bd123-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-cv2mt\" (UID: \"fa51678a-734c-4f52-b195-0df51e7bd123\") " pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.579591 kubelet[2734]: I0710 00:14:49.579559 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa51678a-734c-4f52-b195-0df51e7bd123-goldmane-key-pair\") pod \"goldmane-58fd7646b9-cv2mt\" (UID: \"fa51678a-734c-4f52-b195-0df51e7bd123\") " pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.579703 kubelet[2734]: I0710 00:14:49.579631 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/624a8fe1-87ea-42e3-8a5f-66462efcb0d0-calico-apiserver-certs\") pod \"calico-apiserver-644f966dd-5wff4\" (UID: \"624a8fe1-87ea-42e3-8a5f-66462efcb0d0\") " pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" Jul 10 00:14:49.579703 kubelet[2734]: I0710 00:14:49.579682 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbz4v\" (UniqueName: \"kubernetes.io/projected/c88bc6c7-1873-4977-a99e-f5b3146d186d-kube-api-access-zbz4v\") pod \"calico-apiserver-644f966dd-9rvwf\" (UID: \"c88bc6c7-1873-4977-a99e-f5b3146d186d\") " 
pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" Jul 10 00:14:49.707937 containerd[1577]: time="2025-07-10T00:14:49.707862692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8jb,Uid:359de6e9-3b2e-443b-b503-de9f39e018a6,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:49.713578 containerd[1577]: time="2025-07-10T00:14:49.713414612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f888ffbb-tv82g,Uid:af71a532-c3cb-4f6b-b34f-557db84bc94c,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:49.720014 containerd[1577]: time="2025-07-10T00:14:49.719427302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f576f4f94-glw6d,Uid:bcc4652c-78d0-457c-983f-264291541c82,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:49.725911 containerd[1577]: time="2025-07-10T00:14:49.725861279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-5wff4,Uid:624a8fe1-87ea-42e3-8a5f-66462efcb0d0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:14:49.732317 containerd[1577]: time="2025-07-10T00:14:49.732270671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-9rvwf,Uid:c88bc6c7-1873-4977-a99e-f5b3146d186d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:14:49.735961 containerd[1577]: time="2025-07-10T00:14:49.735913925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg7tq,Uid:c9b0fe0a-7632-435e-8a0e-5adbb06433c1,Namespace:kube-system,Attempt:0,}" Jul 10 00:14:49.741081 containerd[1577]: time="2025-07-10T00:14:49.741038556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cv2mt,Uid:fa51678a-734c-4f52-b195-0df51e7bd123,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:49.840794 containerd[1577]: time="2025-07-10T00:14:49.840570216Z" level=error msg="Failed to destroy network for sandbox \"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.857119 containerd[1577]: time="2025-07-10T00:14:49.856959687Z" level=error msg="Failed to destroy network for sandbox \"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.858089 containerd[1577]: time="2025-07-10T00:14:49.858061435Z" level=error msg="Failed to destroy network for sandbox \"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.858438 containerd[1577]: time="2025-07-10T00:14:49.858395619Z" level=error msg="Failed to destroy network for sandbox \"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.860686 containerd[1577]: time="2025-07-10T00:14:49.860652865Z" level=error msg="Failed to destroy network for sandbox \"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.878024 containerd[1577]: time="2025-07-10T00:14:49.877941346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f576f4f94-glw6d,Uid:bcc4652c-78d0-457c-983f-264291541c82,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.878225 containerd[1577]: time="2025-07-10T00:14:49.878033648Z" level=error msg="Failed to destroy network for sandbox \"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.878225 containerd[1577]: time="2025-07-10T00:14:49.877949632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-5wff4,Uid:624a8fe1-87ea-42e3-8a5f-66462efcb0d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.878225 containerd[1577]: time="2025-07-10T00:14:49.877981991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-9rvwf,Uid:c88bc6c7-1873-4977-a99e-f5b3146d186d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.878856 kubelet[2734]: E0710 00:14:49.878793 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.879042 kubelet[2734]: E0710 00:14:49.878897 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f576f4f94-glw6d" Jul 10 00:14:49.879042 kubelet[2734]: E0710 00:14:49.878923 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f576f4f94-glw6d" Jul 10 00:14:49.879042 kubelet[2734]: E0710 00:14:49.878988 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f576f4f94-glw6d_calico-system(bcc4652c-78d0-457c-983f-264291541c82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f576f4f94-glw6d_calico-system(bcc4652c-78d0-457c-983f-264291541c82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18f212e70b9b6b81335baa70b40c40d7287a445aa3953173bdaac5749fdab743\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f576f4f94-glw6d" podUID="bcc4652c-78d0-457c-983f-264291541c82" Jul 10 00:14:49.879206 kubelet[2734]: E0710 00:14:49.878801 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.879206 kubelet[2734]: E0710 00:14:49.879168 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" Jul 10 00:14:49.879286 kubelet[2734]: E0710 00:14:49.879214 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" Jul 10 00:14:49.879286 kubelet[2734]: E0710 00:14:49.879270 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.879380 kubelet[2734]: E0710 00:14:49.879261 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-644f966dd-5wff4_calico-apiserver(624a8fe1-87ea-42e3-8a5f-66462efcb0d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-644f966dd-5wff4_calico-apiserver(624a8fe1-87ea-42e3-8a5f-66462efcb0d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39b89a5d0ba47b7ec56735a91aeeb735a9fee9df8675a385806fa421948645c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" podUID="624a8fe1-87ea-42e3-8a5f-66462efcb0d0" Jul 10 00:14:49.879380 kubelet[2734]: E0710 00:14:49.879296 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" Jul 10 00:14:49.879380 kubelet[2734]: E0710 00:14:49.879317 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" Jul 10 00:14:49.879522 kubelet[2734]: E0710 00:14:49.879375 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-644f966dd-9rvwf_calico-apiserver(c88bc6c7-1873-4977-a99e-f5b3146d186d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-644f966dd-9rvwf_calico-apiserver(c88bc6c7-1873-4977-a99e-f5b3146d186d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"347bebd8c0ee4ddf105c6c7bee283ece83e03be625ddff418564cd9ea9d0a4d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" podUID="c88bc6c7-1873-4977-a99e-f5b3146d186d" Jul 10 00:14:49.879582 containerd[1577]: time="2025-07-10T00:14:49.879440676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f888ffbb-tv82g,Uid:af71a532-c3cb-4f6b-b34f-557db84bc94c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.880149 kubelet[2734]: E0710 00:14:49.880105 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.880149 kubelet[2734]: E0710 00:14:49.880142 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" Jul 10 00:14:49.880309 kubelet[2734]: E0710 00:14:49.880161 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" Jul 10 00:14:49.880309 kubelet[2734]: E0710 00:14:49.880202 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f888ffbb-tv82g_calico-system(af71a532-c3cb-4f6b-b34f-557db84bc94c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f888ffbb-tv82g_calico-system(af71a532-c3cb-4f6b-b34f-557db84bc94c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e450e2239433401c28c193ecce7a2f303025250411910b98e0c93a4b549feffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" podUID="af71a532-c3cb-4f6b-b34f-557db84bc94c" Jul 10 00:14:49.882122 containerd[1577]: time="2025-07-10T00:14:49.882008512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg7tq,Uid:c9b0fe0a-7632-435e-8a0e-5adbb06433c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.882484 kubelet[2734]: E0710 00:14:49.882311 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.882484 kubelet[2734]: E0710 00:14:49.882378 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gg7tq" Jul 10 00:14:49.882484 kubelet[2734]: E0710 00:14:49.882402 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gg7tq" Jul 10 00:14:49.882661 kubelet[2734]: E0710 00:14:49.882440 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gg7tq_kube-system(c9b0fe0a-7632-435e-8a0e-5adbb06433c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gg7tq_kube-system(c9b0fe0a-7632-435e-8a0e-5adbb06433c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eff3b2b8a6341ffd9f2386954bfa3e270bc440da40d9a370edae0fc82dd01fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gg7tq" podUID="c9b0fe0a-7632-435e-8a0e-5adbb06433c1" Jul 10 00:14:49.883295 containerd[1577]: time="2025-07-10T00:14:49.883256934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8jb,Uid:359de6e9-3b2e-443b-b503-de9f39e018a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.883458 kubelet[2734]: E0710 00:14:49.883405 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.883458 kubelet[2734]: E0710 00:14:49.883432 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm8jb" Jul 10 00:14:49.883458 kubelet[2734]: E0710 00:14:49.883449 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mm8jb" Jul 10 00:14:49.883651 kubelet[2734]: E0710 00:14:49.883487 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mm8jb_kube-system(359de6e9-3b2e-443b-b503-de9f39e018a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mm8jb_kube-system(359de6e9-3b2e-443b-b503-de9f39e018a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c4e557d4d54a6df581ec9dd060e3e9d170aca4654acc2991d577e4c4330baad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mm8jb" podUID="359de6e9-3b2e-443b-b503-de9f39e018a6" Jul 10 00:14:49.902667 containerd[1577]: time="2025-07-10T00:14:49.902613136Z" level=error 
msg="Failed to destroy network for sandbox \"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.904073 containerd[1577]: time="2025-07-10T00:14:49.904022909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cv2mt,Uid:fa51678a-734c-4f52-b195-0df51e7bd123,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.904310 kubelet[2734]: E0710 00:14:49.904273 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:49.904413 kubelet[2734]: E0710 00:14:49.904333 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.904413 kubelet[2734]: E0710 00:14:49.904362 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cv2mt" Jul 10 00:14:49.904481 kubelet[2734]: E0710 00:14:49.904406 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-cv2mt_calico-system(fa51678a-734c-4f52-b195-0df51e7bd123)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-cv2mt_calico-system(fa51678a-734c-4f52-b195-0df51e7bd123)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8cc3adf27ef7d4969fc22a1f43c71beb3789b974b444c6f9d337e1b18d3e414\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-cv2mt" podUID="fa51678a-734c-4f52-b195-0df51e7bd123" Jul 10 00:14:50.009155 systemd[1]: Created slice kubepods-besteffort-poda2039424_1061_4f04_994f_fe93d4b49d0e.slice - libcontainer container kubepods-besteffort-poda2039424_1061_4f04_994f_fe93d4b49d0e.slice. 
Jul 10 00:14:50.011626 containerd[1577]: time="2025-07-10T00:14:50.011580784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hrm5,Uid:a2039424-1061-4f04-994f-fe93d4b49d0e,Namespace:calico-system,Attempt:0,}" Jul 10 00:14:50.067162 containerd[1577]: time="2025-07-10T00:14:50.067093853Z" level=error msg="Failed to destroy network for sandbox \"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:50.068528 containerd[1577]: time="2025-07-10T00:14:50.068489550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hrm5,Uid:a2039424-1061-4f04-994f-fe93d4b49d0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:50.068806 kubelet[2734]: E0710 00:14:50.068757 2734 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:14:50.068859 kubelet[2734]: E0710 00:14:50.068838 2734 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:50.068886 kubelet[2734]: E0710 00:14:50.068863 2734 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9hrm5" Jul 10 00:14:50.068950 kubelet[2734]: E0710 00:14:50.068919 2734 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9hrm5_calico-system(a2039424-1061-4f04-994f-fe93d4b49d0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9hrm5_calico-system(a2039424-1061-4f04-994f-fe93d4b49d0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5aa7231af0deac8ce3512b4662eedfdfea2f30f4c592e52cbc1091e919bfdea8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9hrm5" podUID="a2039424-1061-4f04-994f-fe93d4b49d0e" Jul 10 00:14:50.310207 containerd[1577]: time="2025-07-10T00:14:50.309959684Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:14:57.381207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4193502695.mount: Deactivated successfully. Jul 10 00:14:58.561546 containerd[1577]: time="2025-07-10T00:14:58.561457936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:58.562866 containerd[1577]: time="2025-07-10T00:14:58.562788712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:14:58.564582 containerd[1577]: time="2025-07-10T00:14:58.564514674Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:58.568933 containerd[1577]: time="2025-07-10T00:14:58.568851850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:14:58.569544 containerd[1577]: time="2025-07-10T00:14:58.569497520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 8.259474277s" Jul 10 00:14:58.569620 containerd[1577]: time="2025-07-10T00:14:58.569545734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:14:58.590617 containerd[1577]: time="2025-07-10T00:14:58.590555597Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:14:58.626158 containerd[1577]: time="2025-07-10T00:14:58.626069067Z" level=info msg="Container a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:14:58.918384 containerd[1577]: time="2025-07-10T00:14:58.918228699Z" level=info msg="CreateContainer within sandbox \"f3cdf4fbe1eaa0649819d4777903d484a06c0d156a45e12fb5158ba2bd0cb1db\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\"" Jul 10 00:14:58.918925 containerd[1577]: time="2025-07-10T00:14:58.918873898Z" level=info msg="StartContainer for \"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\"" Jul 10 00:14:58.920646 containerd[1577]: time="2025-07-10T00:14:58.920612044Z" level=info msg="connecting to shim a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6" address="unix:///run/containerd/s/c8919e19eb0bf8d85a72d9ea570dde978b52a2d26ddea7d77052a5529baee2c3" protocol=ttrpc version=3 Jul 10 00:14:58.947167 systemd[1]: Started cri-containerd-a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6.scope - libcontainer container a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6. 
Jul 10 00:14:59.065324 containerd[1577]: time="2025-07-10T00:14:59.065270900Z" level=info msg="StartContainer for \"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" returns successfully" Jul 10 00:14:59.126481 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:14:59.126628 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 00:14:59.567893 containerd[1577]: time="2025-07-10T00:14:59.567814715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" id:\"c1bd1f77c79fb83f27044bad349c514debbdfbaabb845ad7c8e160c3c1139078\" pid:3842 exit_status:1 exited_at:{seconds:1752106499 nanos:567468455}" Jul 10 00:15:00.004425 kubelet[2734]: I0710 00:15:00.003899 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bfpj8" podStartSLOduration=2.723313541 podStartE2EDuration="25.003870032s" podCreationTimestamp="2025-07-10 00:14:35 +0000 UTC" firstStartedPulling="2025-07-10 00:14:36.290175639 +0000 UTC m=+27.399795182" lastFinishedPulling="2025-07-10 00:14:58.57073212 +0000 UTC m=+49.680351673" observedRunningTime="2025-07-10 00:14:59.754841725 +0000 UTC m=+50.864461278" watchObservedRunningTime="2025-07-10 00:15:00.003870032 +0000 UTC m=+51.113489586" Jul 10 00:15:00.163851 kubelet[2734]: I0710 00:15:00.163773 2734 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc4652c-78d0-457c-983f-264291541c82-whisker-ca-bundle\") pod \"bcc4652c-78d0-457c-983f-264291541c82\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " Jul 10 00:15:00.163851 kubelet[2734]: I0710 00:15:00.163830 2734 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v7qf\" (UniqueName: \"kubernetes.io/projected/bcc4652c-78d0-457c-983f-264291541c82-kube-api-access-2v7qf\") pod \"bcc4652c-78d0-457c-983f-264291541c82\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " Jul 10 00:15:00.163851 kubelet[2734]: I0710 00:15:00.163848 2734 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bcc4652c-78d0-457c-983f-264291541c82-whisker-backend-key-pair\") pod \"bcc4652c-78d0-457c-983f-264291541c82\" (UID: \"bcc4652c-78d0-457c-983f-264291541c82\") " Jul 10 00:15:00.164381 kubelet[2734]: I0710 00:15:00.164333 2734 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc4652c-78d0-457c-983f-264291541c82-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bcc4652c-78d0-457c-983f-264291541c82" (UID: "bcc4652c-78d0-457c-983f-264291541c82"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:15:00.177369 kubelet[2734]: I0710 00:15:00.177296 2734 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc4652c-78d0-457c-983f-264291541c82-kube-api-access-2v7qf" (OuterVolumeSpecName: "kube-api-access-2v7qf") pod "bcc4652c-78d0-457c-983f-264291541c82" (UID: "bcc4652c-78d0-457c-983f-264291541c82"). InnerVolumeSpecName "kube-api-access-2v7qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:15:00.178130 kubelet[2734]: I0710 00:15:00.178095 2734 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcc4652c-78d0-457c-983f-264291541c82-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bcc4652c-78d0-457c-983f-264291541c82" (UID: "bcc4652c-78d0-457c-983f-264291541c82"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:15:00.178294 systemd[1]: var-lib-kubelet-pods-bcc4652c\x2d78d0\x2d457c\x2d983f\x2d264291541c82-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2v7qf.mount: Deactivated successfully. Jul 10 00:15:00.178421 systemd[1]: var-lib-kubelet-pods-bcc4652c\x2d78d0\x2d457c\x2d983f\x2d264291541c82-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:15:00.264242 kubelet[2734]: I0710 00:15:00.264094 2734 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v7qf\" (UniqueName: \"kubernetes.io/projected/bcc4652c-78d0-457c-983f-264291541c82-kube-api-access-2v7qf\") on node \"localhost\" DevicePath \"\"" Jul 10 00:15:00.264242 kubelet[2734]: I0710 00:15:00.264128 2734 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bcc4652c-78d0-457c-983f-264291541c82-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:15:00.264242 kubelet[2734]: I0710 00:15:00.264137 2734 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc4652c-78d0-457c-983f-264291541c82-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:15:00.368244 systemd[1]: Removed slice kubepods-besteffort-podbcc4652c_78d0_457c_983f_264291541c82.slice - libcontainer container kubepods-besteffort-podbcc4652c_78d0_457c_983f_264291541c82.slice. Jul 10 00:15:00.449369 containerd[1577]: time="2025-07-10T00:15:00.449319686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" id:\"16b28a07f292a9b915595dc62ba1c3a652b3a67aacdd49a9b0ff2c12b8708145\" pid:3877 exit_status:1 exited_at:{seconds:1752106500 nanos:448992043}" Jul 10 00:15:01.005004 containerd[1577]: time="2025-07-10T00:15:01.004692961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cv2mt,Uid:fa51678a-734c-4f52-b195-0df51e7bd123,Namespace:calico-system,Attempt:0,}" Jul 10 00:15:01.005594 containerd[1577]: time="2025-07-10T00:15:01.005140114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg7tq,Uid:c9b0fe0a-7632-435e-8a0e-5adbb06433c1,Namespace:kube-system,Attempt:0,}" Jul 10 00:15:01.615207 systemd[1]: Created slice kubepods-besteffort-podd164cda6_2a00_4ece_a7cd_c26da42774e7.slice - libcontainer container kubepods-besteffort-podd164cda6_2a00_4ece_a7cd_c26da42774e7.slice. 
Jul 10 00:15:01.775275 kubelet[2734]: I0710 00:15:01.775167 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4tx2\" (UniqueName: \"kubernetes.io/projected/d164cda6-2a00-4ece-a7cd-c26da42774e7-kube-api-access-p4tx2\") pod \"whisker-66666b6b-qrtjl\" (UID: \"d164cda6-2a00-4ece-a7cd-c26da42774e7\") " pod="calico-system/whisker-66666b6b-qrtjl" Jul 10 00:15:01.775275 kubelet[2734]: I0710 00:15:01.775243 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d164cda6-2a00-4ece-a7cd-c26da42774e7-whisker-backend-key-pair\") pod \"whisker-66666b6b-qrtjl\" (UID: \"d164cda6-2a00-4ece-a7cd-c26da42774e7\") " pod="calico-system/whisker-66666b6b-qrtjl" Jul 10 00:15:01.775275 kubelet[2734]: I0710 00:15:01.775263 2734 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d164cda6-2a00-4ece-a7cd-c26da42774e7-whisker-ca-bundle\") pod \"whisker-66666b6b-qrtjl\" (UID: \"d164cda6-2a00-4ece-a7cd-c26da42774e7\") " pod="calico-system/whisker-66666b6b-qrtjl" Jul 10 00:15:02.818817 containerd[1577]: time="2025-07-10T00:15:02.818767865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66666b6b-qrtjl,Uid:d164cda6-2a00-4ece-a7cd-c26da42774e7,Namespace:calico-system,Attempt:0,}" Jul 10 00:15:03.004014 containerd[1577]: time="2025-07-10T00:15:03.003831199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hrm5,Uid:a2039424-1061-4f04-994f-fe93d4b49d0e,Namespace:calico-system,Attempt:0,}" Jul 10 00:15:03.004014 containerd[1577]: time="2025-07-10T00:15:03.003947373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8jb,Uid:359de6e9-3b2e-443b-b503-de9f39e018a6,Namespace:kube-system,Attempt:0,}" Jul 10 00:15:03.006286 kubelet[2734]: I0710 00:15:03.006245 2734 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc4652c-78d0-457c-983f-264291541c82" path="/var/lib/kubelet/pods/bcc4652c-78d0-457c-983f-264291541c82/volumes" Jul 10 00:15:04.004469 containerd[1577]: time="2025-07-10T00:15:04.004409425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-5wff4,Uid:624a8fe1-87ea-42e3-8a5f-66462efcb0d0,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:15:05.003655 containerd[1577]: time="2025-07-10T00:15:05.003593030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f888ffbb-tv82g,Uid:af71a532-c3cb-4f6b-b34f-557db84bc94c,Namespace:calico-system,Attempt:0,}" Jul 10 00:15:05.003900 containerd[1577]: time="2025-07-10T00:15:05.003597229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-9rvwf,Uid:c88bc6c7-1873-4977-a99e-f5b3146d186d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:15:05.464590 systemd-networkd[1459]: cali2a5c42b92d2: Link UP Jul 10 00:15:05.465904 systemd-networkd[1459]: cali2a5c42b92d2: Gained carrier Jul 10 00:15:05.820590 containerd[1577]: 2025-07-10 00:15:01.377 [INFO][3890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:15:05.820590 containerd[1577]: 2025-07-10 00:15:01.673 [INFO][3890] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0 goldmane-58fd7646b9- calico-system fa51678a-734c-4f52-b195-0df51e7bd123 851 0 
2025-07-10 00:14:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-cv2mt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a5c42b92d2 [] [] }} ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-" Jul 10 00:15:05.820590 containerd[1577]: 2025-07-10 00:15:01.673 [INFO][3890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.820590 containerd[1577]: 2025-07-10 00:15:03.315 [INFO][3919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" HandleID="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Workload="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:03.316 [INFO][3919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" HandleID="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Workload="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011f070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-cv2mt", "timestamp":"2025-07-10 00:15:03.315885646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:03.316 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:03.316 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:03.316 [INFO][3919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:03.470 [INFO][3919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" host="localhost" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:04.111 [INFO][3919] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:04.133 [INFO][3919] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:04.139 [INFO][3919] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:04.147 [INFO][3919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:05.822353 containerd[1577]: 2025-07-10 00:15:04.147 [INFO][3919] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" host="localhost" Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.154 [INFO][3919] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.200 [INFO][3919] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" host="localhost" Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.422 [INFO][3919] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" host="localhost" Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.423 [INFO][3919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" host="localhost" Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.423 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:15:05.822662 containerd[1577]: 2025-07-10 00:15:04.423 [INFO][3919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" HandleID="k8s-pod-network.fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Workload="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.822834 containerd[1577]: 2025-07-10 00:15:04.443 [INFO][3890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fa51678a-734c-4f52-b195-0df51e7bd123", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-cv2mt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a5c42b92d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:05.822834 containerd[1577]: 2025-07-10 00:15:04.444 [INFO][3890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.822937 containerd[1577]: 2025-07-10 00:15:04.444 [INFO][3890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a5c42b92d2 ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.822937 containerd[1577]: 2025-07-10 00:15:05.465 [INFO][3890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:05.823866 containerd[1577]: 2025-07-10 00:15:05.465 [INFO][3890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"fa51678a-734c-4f52-b195-0df51e7bd123", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef", Pod:"goldmane-58fd7646b9-cv2mt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a5c42b92d2", MAC:"3e:13:68:6a:9c:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:05.824047 containerd[1577]: 2025-07-10 00:15:05.806 [INFO][3890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" Namespace="calico-system" Pod="goldmane-58fd7646b9-cv2mt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--cv2mt-eth0" Jul 10 00:15:06.038303 systemd-networkd[1459]: cali24e9248bbc3: Link UP Jul 10 00:15:06.038877 systemd-networkd[1459]: cali24e9248bbc3: Gained carrier Jul 10 00:15:06.396848 containerd[1577]: 2025-07-10 00:15:01.390 [INFO][3901] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:15:06.396848 containerd[1577]: 2025-07-10 00:15:01.667 [INFO][3901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0 coredns-7c65d6cfc9- kube-system c9b0fe0a-7632-435e-8a0e-5adbb06433c1 848 0 2025-07-10 00:14:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gg7tq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24e9248bbc3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-" Jul 10 00:15:06.396848 containerd[1577]: 2025-07-10 00:15:01.670 [INFO][3901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.396848 containerd[1577]: 2025-07-10 00:15:03.316 [INFO][3921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" HandleID="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Workload="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:03.317 [INFO][3921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" HandleID="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Workload="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000331290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gg7tq", "timestamp":"2025-07-10 00:15:03.316879703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:03.317 [INFO][3921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:04.426 [INFO][3921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:04.426 [INFO][3921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.785 [INFO][3921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" host="localhost" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.815 [INFO][3921] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.827 [INFO][3921] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.832 [INFO][3921] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.835 [INFO][3921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:06.397499 containerd[1577]: 2025-07-10 00:15:05.835 [INFO][3921] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" host="localhost" Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:05.837 [INFO][3921] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0 Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:05.878 [INFO][3921] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" host="localhost" Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:06.031 [INFO][3921] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" host="localhost" Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:06.031 [INFO][3921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" host="localhost" Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:06.031 [INFO][3921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:15:06.397737 containerd[1577]: 2025-07-10 00:15:06.031 [INFO][3921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" HandleID="k8s-pod-network.bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Workload="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.397906 containerd[1577]: 2025-07-10 00:15:06.035 [INFO][3901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9b0fe0a-7632-435e-8a0e-5adbb06433c1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gg7tq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24e9248bbc3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:06.398035 containerd[1577]: 2025-07-10 00:15:06.036 [INFO][3901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.398035 containerd[1577]: 2025-07-10 00:15:06.036 [INFO][3901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24e9248bbc3 ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.398035 containerd[1577]: 2025-07-10 00:15:06.040 [INFO][3901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.398120 containerd[1577]: 2025-07-10 00:15:06.040 [INFO][3901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c9b0fe0a-7632-435e-8a0e-5adbb06433c1", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0", Pod:"coredns-7c65d6cfc9-gg7tq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24e9248bbc3", MAC:"52:98:2f:03:7b:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:06.398120 containerd[1577]: 2025-07-10 00:15:06.392 [INFO][3901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg7tq" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg7tq-eth0" Jul 10 00:15:06.813053 systemd-networkd[1459]: vxlan.calico: Link UP Jul 10 00:15:06.813066 systemd-networkd[1459]: vxlan.calico: Gained carrier Jul 10 00:15:07.021263 systemd-networkd[1459]: cali2a5c42b92d2: Gained IPv6LL Jul 10 00:15:07.218353 systemd-networkd[1459]: calic4d4f4b32d2: Link UP Jul 10 00:15:07.218563 systemd-networkd[1459]: calic4d4f4b32d2: Gained carrier Jul 10 00:15:07.341146 systemd-networkd[1459]: cali24e9248bbc3: Gained IPv6LL Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.470 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66666b6b--qrtjl-eth0 whisker-66666b6b- calico-system d164cda6-2a00-4ece-a7cd-c26da42774e7 926 0 2025-07-10 00:15:01 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66666b6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66666b6b-qrtjl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic4d4f4b32d2 [] [] }} ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.471 [INFO][4098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.854 [INFO][4140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" HandleID="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Workload="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.854 [INFO][4140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" HandleID="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Workload="localhost-k8s-whisker--66666b6b--qrtjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004982e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66666b6b-qrtjl", "timestamp":"2025-07-10 00:15:06.854103849 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.854 [INFO][4140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.854 [INFO][4140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.854 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.976 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.984 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.992 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.994 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.997 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.997 [INFO][4140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:06.999 [INFO][4140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13 Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:07.046 [INFO][4140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:07.204 [INFO][4140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:07.204 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" host="localhost" Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:07.204 [INFO][4140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:15:07.690051 containerd[1577]: 2025-07-10 00:15:07.204 [INFO][4140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" HandleID="k8s-pod-network.9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Workload="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.210 [INFO][4098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66666b6b--qrtjl-eth0", GenerateName:"whisker-66666b6b-", Namespace:"calico-system", SelfLink:"", UID:"d164cda6-2a00-4ece-a7cd-c26da42774e7", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66666b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66666b6b-qrtjl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4d4f4b32d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.210 [INFO][4098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.211 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4d4f4b32d2 ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.220 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.222 [INFO][4098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66666b6b--qrtjl-eth0", GenerateName:"whisker-66666b6b-", Namespace:"calico-system", SelfLink:"", UID:"d164cda6-2a00-4ece-a7cd-c26da42774e7", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66666b6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13", Pod:"whisker-66666b6b-qrtjl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic4d4f4b32d2", MAC:"de:18:08:f8:22:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:07.692487 containerd[1577]: 2025-07-10 00:15:07.685 [INFO][4098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" Namespace="calico-system" Pod="whisker-66666b6b-qrtjl" WorkloadEndpoint="localhost-k8s-whisker--66666b6b--qrtjl-eth0" Jul 10 00:15:07.697582 systemd-networkd[1459]: califce4ab06f7f: Link UP Jul 10 00:15:07.699174 systemd-networkd[1459]: califce4ab06f7f: Gained carrier Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:06.476 [INFO][4099] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9hrm5-eth0 csi-node-driver- calico-system a2039424-1061-4f04-994f-fe93d4b49d0e 724 0 2025-07-10 00:14:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9hrm5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califce4ab06f7f [] [] }} ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:06.476 [INFO][4099] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:06.855 [INFO][4142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" HandleID="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" 
Workload="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:06.856 [INFO][4142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" HandleID="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Workload="localhost-k8s-csi--node--driver--9hrm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b0b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9hrm5", "timestamp":"2025-07-10 00:15:06.85512355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:06.856 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.205 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.205 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.222 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.227 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.232 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.234 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.236 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.236 [INFO][4142] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.238 [INFO][4142] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.387 [INFO][4142] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.686 [INFO][4142] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.687 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" host="localhost" Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.687 [INFO][4142] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jul 10 00:15:07.948414 containerd[1577]: 2025-07-10 00:15:07.687 [INFO][4142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" HandleID="k8s-pod-network.1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Workload="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.693 [INFO][4099] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9hrm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2039424-1061-4f04-994f-fe93d4b49d0e", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9hrm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califce4ab06f7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.694 [INFO][4099] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.694 [INFO][4099] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califce4ab06f7f ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.701 [INFO][4099] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.703 [INFO][4099] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9hrm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2039424-1061-4f04-994f-fe93d4b49d0e", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c", Pod:"csi-node-driver-9hrm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califce4ab06f7f", MAC:"9e:55:be:8d:18:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:07.949088 containerd[1577]: 2025-07-10 00:15:07.944 [INFO][4099] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" Namespace="calico-system" Pod="csi-node-driver-9hrm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--9hrm5-eth0" Jul 10 00:15:08.173197 systemd-networkd[1459]: vxlan.calico: Gained IPv6LL Jul 10 00:15:08.737125 systemd-networkd[1459]: calidceebaf933e: Link UP Jul 10 00:15:08.739791 systemd-networkd[1459]: calidceebaf933e: Gained carrier Jul 10 00:15:08.882175 systemd-networkd[1459]: calic4d4f4b32d2: Gained IPv6LL Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.209 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0 coredns-7c65d6cfc9- kube-system 359de6e9-3b2e-443b-b503-de9f39e018a6 844 0 2025-07-10 00:14:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-mm8jb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidceebaf933e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.209 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 
00:15:08.261 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" HandleID="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Workload="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.261 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" HandleID="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Workload="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-mm8jb", "timestamp":"2025-07-10 00:15:08.261240508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.261 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.261 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.261 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.275 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.282 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.405 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.407 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.410 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.410 [INFO][4268] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.411 [INFO][4268] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0 Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.436 [INFO][4268] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.727 [INFO][4268] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.727 [INFO][4268] ipam/ipam.go 
878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" host="localhost" Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.727 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:15:08.952536 containerd[1577]: 2025-07-10 00:15:08.727 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" HandleID="k8s-pod-network.0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Workload="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.731 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"359de6e9-3b2e-443b-b503-de9f39e018a6", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-mm8jb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceebaf933e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.731 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.731 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidceebaf933e ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.741 
[INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.741 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"359de6e9-3b2e-443b-b503-de9f39e018a6", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0", Pod:"coredns-7c65d6cfc9-mm8jb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidceebaf933e", MAC:"02:d4:d3:33:b3:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:08.954002 containerd[1577]: 2025-07-10 00:15:08.947 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mm8jb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--mm8jb-eth0" Jul 10 00:15:09.645128 systemd-networkd[1459]: califce4ab06f7f: Gained IPv6LL Jul 10 00:15:10.477181 systemd-networkd[1459]: calidceebaf933e: Gained IPv6LL Jul 10 00:15:10.538168 systemd-networkd[1459]: cali225178cebb2: Link UP Jul 10 00:15:10.538943 systemd-networkd[1459]: cali225178cebb2: Gained carrier Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.238 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0 calico-apiserver-644f966dd- calico-apiserver 624a8fe1-87ea-42e3-8a5f-66462efcb0d0 855 0 2025-07-10 00:14:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:644f966dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-644f966dd-5wff4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali225178cebb2 [] [] }} ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.238 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.328 [INFO][4275] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" HandleID="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Workload="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.328 [INFO][4275] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" HandleID="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Workload="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-644f966dd-5wff4", "timestamp":"2025-07-10 00:15:08.328597482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.328 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.727 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.728 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.739 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.751 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.953 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.956 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.958 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.958 [INFO][4275] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:08.960 [INFO][4275] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:09.450 [INFO][4275] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:10.527 [INFO][4275] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:10.527 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" host="localhost" Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:10.528 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:15:10.805148 containerd[1577]: 2025-07-10 00:15:10.528 [INFO][4275] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" HandleID="k8s-pod-network.831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Workload="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.531 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0", GenerateName:"calico-apiserver-644f966dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"624a8fe1-87ea-42e3-8a5f-66462efcb0d0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644f966dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-644f966dd-5wff4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali225178cebb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.531 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.531 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali225178cebb2 ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.540 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.542 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0", GenerateName:"calico-apiserver-644f966dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"624a8fe1-87ea-42e3-8a5f-66462efcb0d0", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644f966dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd", Pod:"calico-apiserver-644f966dd-5wff4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali225178cebb2", MAC:"5a:1a:22:8b:c9:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:10.810485 containerd[1577]: 2025-07-10 00:15:10.795 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-5wff4" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--5wff4-eth0" Jul 10 00:15:11.193504 systemd-networkd[1459]: calie44f45bf24d: Link UP Jul 10 00:15:11.195723 systemd-networkd[1459]: calie44f45bf24d: Gained carrier Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:08.408 [INFO][4285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0 calico-kube-controllers-78f888ffbb- calico-system af71a532-c3cb-4f6b-b34f-557db84bc94c 852 0 2025-07-10 00:14:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f888ffbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78f888ffbb-tv82g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie44f45bf24d [] [] }} ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:08.408 [INFO][4285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" 
Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:08.436 [INFO][4301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" HandleID="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Workload="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:08.437 [INFO][4301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" HandleID="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Workload="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042e360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78f888ffbb-tv82g", "timestamp":"2025-07-10 00:15:08.436837381 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:08.437 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.528 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.528 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.797 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.826 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.845 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.848 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.851 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.851 [INFO][4301] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.853 [INFO][4301] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840 Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:10.882 [INFO][4301] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4301] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" host="localhost" Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:15:11.441497 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" HandleID="k8s-pod-network.9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Workload="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.188 [INFO][4285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0", GenerateName:"calico-kube-controllers-78f888ffbb-", Namespace:"calico-system", SelfLink:"", UID:"af71a532-c3cb-4f6b-b34f-557db84bc94c", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f888ffbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78f888ffbb-tv82g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie44f45bf24d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.188 [INFO][4285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.188 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie44f45bf24d ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.194 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.197 [INFO][4285] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0", GenerateName:"calico-kube-controllers-78f888ffbb-", Namespace:"calico-system", SelfLink:"", UID:"af71a532-c3cb-4f6b-b34f-557db84bc94c", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f888ffbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840", Pod:"calico-kube-controllers-78f888ffbb-tv82g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie44f45bf24d", MAC:"ce:7d:10:a7:cf:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:11.443238 containerd[1577]: 2025-07-10 00:15:11.436 [INFO][4285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" Namespace="calico-system" Pod="calico-kube-controllers-78f888ffbb-tv82g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78f888ffbb--tv82g-eth0" Jul 10 00:15:11.821269 systemd-networkd[1459]: cali225178cebb2: Gained IPv6LL Jul 10 00:15:12.056865 systemd-networkd[1459]: cali9be724a5f36: Link UP Jul 10 00:15:12.059355 systemd-networkd[1459]: cali9be724a5f36: Gained carrier Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:08.736 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0 calico-apiserver-644f966dd- calico-apiserver c88bc6c7-1873-4977-a99e-f5b3146d186d 854 0 2025-07-10 00:14:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:644f966dd 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-644f966dd-9rvwf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9be724a5f36 [] [] }} ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:08.740 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:08.777 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" HandleID="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Workload="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:08.777 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" HandleID="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Workload="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-644f966dd-9rvwf", "timestamp":"2025-07-10 00:15:08.777647447 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:08.777 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.185 [INFO][4326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.438 [INFO][4326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.447 [INFO][4326] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.451 [INFO][4326] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.453 [INFO][4326] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.456 [INFO][4326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.456 [INFO][4326] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.458 [INFO][4326] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:11.498 [INFO][4326] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:12.048 [INFO][4326] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:12.048 [INFO][4326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" host="localhost" Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:12.048 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:15:12.135009 containerd[1577]: 2025-07-10 00:15:12.048 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" HandleID="k8s-pod-network.57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Workload="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.052 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0", GenerateName:"calico-apiserver-644f966dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c88bc6c7-1873-4977-a99e-f5b3146d186d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644f966dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-644f966dd-9rvwf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9be724a5f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.053 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.053 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9be724a5f36 ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.057 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.059 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0", GenerateName:"calico-apiserver-644f966dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"c88bc6c7-1873-4977-a99e-f5b3146d186d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"644f966dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd", Pod:"calico-apiserver-644f966dd-9rvwf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9be724a5f36", MAC:"3a:24:f3:bd:92:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:15:12.135908 containerd[1577]: 2025-07-10 00:15:12.131 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" Namespace="calico-apiserver" Pod="calico-apiserver-644f966dd-9rvwf" WorkloadEndpoint="localhost-k8s-calico--apiserver--644f966dd--9rvwf-eth0" Jul 10 00:15:12.333179 systemd-networkd[1459]: calie44f45bf24d: Gained IPv6LL Jul 10 00:15:13.229166 systemd-networkd[1459]: cali9be724a5f36: Gained IPv6LL Jul 10 00:15:13.965737 containerd[1577]: time="2025-07-10T00:15:13.965677425Z" level=info msg="connecting to shim fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef" address="unix:///run/containerd/s/2d8160b55e5e154663607c63164d7ad873f7d44cfab3205533ce1bca5871539f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:14.031099 containerd[1577]: time="2025-07-10T00:15:14.030319849Z" level=info msg="connecting to shim bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0" address="unix:///run/containerd/s/6b6b3d762f0a6f0d3c117b1b022ec09ebdbcbd7b0083739101299749c3ed1a33" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:14.034297 systemd[1]: Started cri-containerd-fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef.scope - libcontainer container fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef. Jul 10 00:15:14.053328 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:14.066142 systemd[1]: Started cri-containerd-bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0.scope - libcontainer container bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0. 
Jul 10 00:15:14.082796 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:14.182651 containerd[1577]: time="2025-07-10T00:15:14.182597531Z" level=info msg="connecting to shim 9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13" address="unix:///run/containerd/s/cb37ed2697b62183ca0eec3053d40df9c778caf1c049ceeae38bada4086c42d5" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:14.222115 systemd[1]: Started cri-containerd-9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13.scope - libcontainer container 9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13. Jul 10 00:15:14.236575 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:14.425998 containerd[1577]: time="2025-07-10T00:15:14.425639246Z" level=info msg="connecting to shim 1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c" address="unix:///run/containerd/s/b69f7766f52cec8e49bbf63bb5947b3dc26445b11214278124613d03a85bb8e5" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:14.456155 systemd[1]: Started cri-containerd-1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c.scope - libcontainer container 1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c. Jul 10 00:15:14.471264 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:14.874057 containerd[1577]: time="2025-07-10T00:15:14.873732690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cv2mt,Uid:fa51678a-734c-4f52-b195-0df51e7bd123,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef\"" Jul 10 00:15:14.877047 containerd[1577]: time="2025-07-10T00:15:14.876555914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:15:14.894154 containerd[1577]: time="2025-07-10T00:15:14.894052098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg7tq,Uid:c9b0fe0a-7632-435e-8a0e-5adbb06433c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0\"" Jul 10 00:15:14.897018 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424). Jul 10 00:15:14.897849 containerd[1577]: time="2025-07-10T00:15:14.897803530Z" level=info msg="CreateContainer within sandbox \"bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:15:14.942292 containerd[1577]: time="2025-07-10T00:15:14.942171936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66666b6b-qrtjl,Uid:d164cda6-2a00-4ece-a7cd-c26da42774e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13\"" Jul 10 00:15:15.029213 sshd[4569]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:15.052833 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:15.058297 systemd-logind[1514]: New session 10 of user core. Jul 10 00:15:15.071179 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 10 00:15:15.137277 containerd[1577]: time="2025-07-10T00:15:15.137127709Z" level=info msg="connecting to shim 0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0" address="unix:///run/containerd/s/2f6df254d04187ab69e03b6a3a7c333f1c3ce3624dbcfe59b7c40e70bb038d7a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:15.162518 containerd[1577]: time="2025-07-10T00:15:15.162450840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9hrm5,Uid:a2039424-1061-4f04-994f-fe93d4b49d0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c\"" Jul 10 00:15:15.178124 systemd[1]: Started cri-containerd-0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0.scope - libcontainer container 0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0. Jul 10 00:15:15.192497 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:15.479336 containerd[1577]: time="2025-07-10T00:15:15.479024348Z" level=info msg="connecting to shim 831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd" address="unix:///run/containerd/s/65a5f0d4f98e10c4b1291bdb02ce3165f0f8ab23264d0d5dcf49c72d4fa5833d" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:15.512133 systemd[1]: Started cri-containerd-831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd.scope - libcontainer container 831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd. Jul 10 00:15:15.525268 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:15.689914 sshd[4573]: Connection closed by 10.0.0.1 port 52424 Jul 10 00:15:15.690318 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:15.694573 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:52424.service: Deactivated successfully. Jul 10 00:15:15.696672 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:15:15.697600 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:15:15.698724 systemd-logind[1514]: Removed session 10. Jul 10 00:15:15.877315 containerd[1577]: time="2025-07-10T00:15:15.877225336Z" level=info msg="connecting to shim 9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840" address="unix:///run/containerd/s/e7a4acc4b271403286f9b1e5b3e5e5e7f3329a10a214bda8f4e261dbe1e69027" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:15.890882 containerd[1577]: time="2025-07-10T00:15:15.890826720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mm8jb,Uid:359de6e9-3b2e-443b-b503-de9f39e018a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0\"" Jul 10 00:15:15.898147 containerd[1577]: time="2025-07-10T00:15:15.898098162Z" level=info msg="CreateContainer within sandbox \"0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:15:15.911354 systemd[1]: Started cri-containerd-9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840.scope - libcontainer container 9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840. 
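(Aside, not part of the journal above.) The repeated "connecting to shim … address=\"unix:///run/containerd/s/…\"" messages refer to per-sandbox shim sockets that containerd reaches over ttrpc. The sketch below is only the transport step, dialing one of the socket paths copied from the log (scheme prefix dropped); it does not speak ttrpc itself, and the path exists only on the node that produced this journal, so off-node the dial is expected to fail:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from a "connecting to shim" message above.
	const sock = "/run/containerd/s/2f6df254d04187ab69e03b6a3a7c333f1c3ce3624dbcfe59b7c40e70bb038d7a"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected off-node):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
	// The real containerd client would now run ttrpc over this connection;
	// that part is omitted here.
}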
Jul 10 00:15:15.924288 containerd[1577]: time="2025-07-10T00:15:15.924228097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-5wff4,Uid:624a8fe1-87ea-42e3-8a5f-66462efcb0d0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd\"" Jul 10 00:15:15.932047 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:16.120886 containerd[1577]: time="2025-07-10T00:15:16.120828599Z" level=info msg="connecting to shim 57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd" address="unix:///run/containerd/s/cc407d178732a4a3f8c37ab3a95300518fc5b5b8419c0558a747e16cd5b113da" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:15:16.165287 systemd[1]: Started cri-containerd-57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd.scope - libcontainer container 57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd. Jul 10 00:15:16.181687 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:15:16.182068 containerd[1577]: time="2025-07-10T00:15:16.182000591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f888ffbb-tv82g,Uid:af71a532-c3cb-4f6b-b34f-557db84bc94c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840\"" Jul 10 00:15:17.220398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount343856013.mount: Deactivated successfully. Jul 10 00:15:17.221807 containerd[1577]: time="2025-07-10T00:15:17.221709674Z" level=info msg="Container ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:17.224497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698742565.mount: Deactivated successfully. 
Jul 10 00:15:17.278868 containerd[1577]: time="2025-07-10T00:15:17.278189579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-644f966dd-9rvwf,Uid:c88bc6c7-1873-4977-a99e-f5b3146d186d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd\"" Jul 10 00:15:17.694906 containerd[1577]: time="2025-07-10T00:15:17.694837953Z" level=info msg="Container 722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:18.518486 containerd[1577]: time="2025-07-10T00:15:18.518324303Z" level=info msg="CreateContainer within sandbox \"0d4c6d818c5dba6ffba9c3594e1b51793b0cd63c430360586be10ff2399e9ac0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771\"" Jul 10 00:15:18.522179 containerd[1577]: time="2025-07-10T00:15:18.520211178Z" level=info msg="StartContainer for \"722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771\"" Jul 10 00:15:18.522179 containerd[1577]: time="2025-07-10T00:15:18.522066391Z" level=info msg="connecting to shim 722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771" address="unix:///run/containerd/s/2f6df254d04187ab69e03b6a3a7c333f1c3ce3624dbcfe59b7c40e70bb038d7a" protocol=ttrpc version=3 Jul 10 00:15:18.548522 containerd[1577]: time="2025-07-10T00:15:18.548371997Z" level=info msg="CreateContainer within sandbox \"bdf30ad2adf3686534f7923bbc8379990ffee5833178ba5424b12324366611b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b\"" Jul 10 00:15:18.549437 containerd[1577]: time="2025-07-10T00:15:18.549409349Z" level=info msg="StartContainer for \"ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b\"" Jul 10 00:15:18.555869 containerd[1577]: time="2025-07-10T00:15:18.555606899Z" level=info msg="connecting to shim ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b" address="unix:///run/containerd/s/6b6b3d762f0a6f0d3c117b1b022ec09ebdbcbd7b0083739101299749c3ed1a33" protocol=ttrpc version=3 Jul 10 00:15:18.623338 systemd[1]: Started cri-containerd-722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771.scope - libcontainer container 722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771. Jul 10 00:15:18.655591 systemd[1]: Started cri-containerd-ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b.scope - libcontainer container ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b. 
Jul 10 00:15:18.791164 containerd[1577]: time="2025-07-10T00:15:18.790945503Z" level=info msg="StartContainer for \"ddc8fcb240f5ad27e2e2f37cbb57ae343493eed6ef2f71a8c95c5e966cb3b09b\" returns successfully" Jul 10 00:15:18.795799 containerd[1577]: time="2025-07-10T00:15:18.795751696Z" level=info msg="StartContainer for \"722583e376e21f38feed5faf441e5d10a5b41dea30ad944d59bb7048c06ec771\" returns successfully" Jul 10 00:15:19.464345 kubelet[2734]: I0710 00:15:19.464267 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gg7tq" podStartSLOduration=65.464245618 podStartE2EDuration="1m5.464245618s" podCreationTimestamp="2025-07-10 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:15:19.463354416 +0000 UTC m=+70.572973989" watchObservedRunningTime="2025-07-10 00:15:19.464245618 +0000 UTC m=+70.573865161" Jul 10 00:15:19.511875 kubelet[2734]: I0710 00:15:19.511597 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mm8jb" podStartSLOduration=65.511573544 podStartE2EDuration="1m5.511573544s" podCreationTimestamp="2025-07-10 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:15:19.511436732 +0000 UTC m=+70.621056305" watchObservedRunningTime="2025-07-10 00:15:19.511573544 +0000 UTC m=+70.621193107" Jul 10 00:15:20.712404 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). Jul 10 00:15:21.250999 sshd[4856]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:21.260658 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:21.280579 systemd-logind[1514]: New session 11 of user core. Jul 10 00:15:21.305450 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:15:21.577291 sshd[4860]: Connection closed by 10.0.0.1 port 47614 Jul 10 00:15:21.578695 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:21.587833 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:47614.service: Deactivated successfully. Jul 10 00:15:21.593389 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:15:21.596430 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:15:21.602222 systemd-logind[1514]: Removed session 11. Jul 10 00:15:22.958426 containerd[1577]: time="2025-07-10T00:15:22.958321521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" id:\"edff220e9675eeaaed2e0fd3c5abfb59f48d87d9799c0ac7d8d71221310284cc\" pid:4896 exited_at:{seconds:1752106522 nanos:957644180}" Jul 10 00:15:23.076557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697378677.mount: Deactivated successfully. 
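(Aside, not part of the journal above.) In the kubelet pod_startup_latency_tracker entries for the coredns pods, the pull timestamps are the zero value, so the reported podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp. A small Go sketch reproducing that arithmetic with the two timestamps copied from the coredns-7c65d6cfc9-gg7tq line (the monotonic "m=+…" suffix dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-07-10 00:14:14 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-10 00:15:19.464245618 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m5.464245618s, matching the reported podStartE2EDuration.
	fmt.Println("E2E startup duration:", running.Sub(created))
}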
Jul 10 00:15:25.829834 containerd[1577]: time="2025-07-10T00:15:25.828589028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:25.845501 containerd[1577]: time="2025-07-10T00:15:25.845285040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 00:15:25.882821 containerd[1577]: time="2025-07-10T00:15:25.882747596Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:25.944785 containerd[1577]: time="2025-07-10T00:15:25.944678107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:25.947084 containerd[1577]: time="2025-07-10T00:15:25.946200637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 11.069608794s" Jul 10 00:15:25.947084 containerd[1577]: time="2025-07-10T00:15:25.946267074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:15:25.948561 containerd[1577]: time="2025-07-10T00:15:25.948477955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:15:25.949988 containerd[1577]: time="2025-07-10T00:15:25.949937105Z" level=info msg="CreateContainer within sandbox \"fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:15:26.159991 containerd[1577]: time="2025-07-10T00:15:26.158351116Z" level=info msg="Container aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:26.470501 containerd[1577]: time="2025-07-10T00:15:26.470292098Z" level=info msg="CreateContainer within sandbox \"fe24bf492e09ba2c68c553db826b3d82c0e4836bd50032e38762c859941bf4ef\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\"" Jul 10 00:15:26.478643 containerd[1577]: time="2025-07-10T00:15:26.478534933Z" level=info msg="StartContainer for \"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\"" Jul 10 00:15:26.481419 containerd[1577]: time="2025-07-10T00:15:26.481339283Z" level=info msg="connecting to shim aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643" address="unix:///run/containerd/s/2d8160b55e5e154663607c63164d7ad873f7d44cfab3205533ce1bca5871539f" protocol=ttrpc version=3 Jul 10 00:15:26.557296 systemd[1]: Started cri-containerd-aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643.scope - libcontainer container aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643. Jul 10 00:15:26.604166 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:47630.service - OpenSSH per-connection server daemon (10.0.0.1:47630). 
Jul 10 00:15:26.739988 sshd[4938]: Accepted publickey for core from 10.0.0.1 port 47630 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:26.744237 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:26.752656 containerd[1577]: time="2025-07-10T00:15:26.752368608Z" level=info msg="StartContainer for \"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" returns successfully" Jul 10 00:15:26.764459 systemd-logind[1514]: New session 12 of user core. Jul 10 00:15:26.773679 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:15:27.142056 sshd[4956]: Connection closed by 10.0.0.1 port 47630 Jul 10 00:15:27.144934 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:27.154096 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:47630.service: Deactivated successfully. Jul 10 00:15:27.159462 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:15:27.160952 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:15:27.172185 systemd-logind[1514]: Removed session 12. Jul 10 00:15:27.663732 kubelet[2734]: I0710 00:15:27.663434 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-cv2mt" podStartSLOduration=41.587125396 podStartE2EDuration="52.659593415s" podCreationTimestamp="2025-07-10 00:14:35 +0000 UTC" firstStartedPulling="2025-07-10 00:15:14.875742557 +0000 UTC m=+65.985362110" lastFinishedPulling="2025-07-10 00:15:25.948210566 +0000 UTC m=+77.057830129" observedRunningTime="2025-07-10 00:15:27.655278843 +0000 UTC m=+78.764898406" watchObservedRunningTime="2025-07-10 00:15:27.659593415 +0000 UTC m=+78.769212969" Jul 10 00:15:27.790923 containerd[1577]: time="2025-07-10T00:15:27.790008177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"ab00848fb8bf9f801a526fcbabde2be0a8ac4f2ae428c7a1ac04e44d9fb6d2b8\" pid:4990 exit_status:1 exited_at:{seconds:1752106527 nanos:789144955}" Jul 10 00:15:28.775802 containerd[1577]: time="2025-07-10T00:15:28.775736267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"a8b51a9f5b7af87f9ff55c6691c3e6480818bca12ac3be7a5b86002258916f8a\" pid:5019 exit_status:1 exited_at:{seconds:1752106528 nanos:769217544}" Jul 10 00:15:29.109726 containerd[1577]: time="2025-07-10T00:15:29.109567130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:15:29.110843 containerd[1577]: time="2025-07-10T00:15:29.110428909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:29.114766 containerd[1577]: time="2025-07-10T00:15:29.114713058Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:29.124365 containerd[1577]: time="2025-07-10T00:15:29.124267270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:29.127740 containerd[1577]: time="2025-07-10T00:15:29.126833300Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.178298998s" Jul 10 00:15:29.127740 containerd[1577]: time="2025-07-10T00:15:29.126889528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:15:29.135700 containerd[1577]: time="2025-07-10T00:15:29.135284645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:15:29.144549 containerd[1577]: time="2025-07-10T00:15:29.143777869Z" level=info msg="CreateContainer within sandbox \"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:15:29.207467 containerd[1577]: time="2025-07-10T00:15:29.207397603Z" level=info msg="Container 7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:29.210546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533820160.mount: Deactivated successfully. Jul 10 00:15:29.233840 containerd[1577]: time="2025-07-10T00:15:29.233739175Z" level=info msg="CreateContainer within sandbox \"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c\"" Jul 10 00:15:29.237857 containerd[1577]: time="2025-07-10T00:15:29.234909241Z" level=info msg="StartContainer for \"7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c\"" Jul 10 00:15:29.238071 containerd[1577]: time="2025-07-10T00:15:29.237890541Z" level=info msg="connecting to shim 7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c" address="unix:///run/containerd/s/cb37ed2697b62183ca0eec3053d40df9c778caf1c049ceeae38bada4086c42d5" protocol=ttrpc version=3 Jul 10 00:15:29.304334 systemd[1]: Started cri-containerd-7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c.scope - libcontainer container 7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c. 
Jul 10 00:15:29.479869 containerd[1577]: time="2025-07-10T00:15:29.479316153Z" level=info msg="StartContainer for \"7b6a3ced407f76d91e317eacaa80adf43537cee1aa6b9fef784b3152112c6e4c\" returns successfully" Jul 10 00:15:29.787960 containerd[1577]: time="2025-07-10T00:15:29.787428071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"aeecd10244e3a2ee62ee6cef10391a74865144ce8aa968981d421a975e1ac318\" pid:5084 exit_status:1 exited_at:{seconds:1752106529 nanos:786856384}" Jul 10 00:15:31.606432 containerd[1577]: time="2025-07-10T00:15:31.606336750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:31.607233 containerd[1577]: time="2025-07-10T00:15:31.607156948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 00:15:31.608532 containerd[1577]: time="2025-07-10T00:15:31.608448323Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:31.610581 containerd[1577]: time="2025-07-10T00:15:31.610504992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:31.611801 containerd[1577]: time="2025-07-10T00:15:31.611083401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.475745646s" Jul 10 00:15:31.611801 containerd[1577]: time="2025-07-10T00:15:31.611190355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:15:31.613771 containerd[1577]: time="2025-07-10T00:15:31.612871451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:15:31.615860 containerd[1577]: time="2025-07-10T00:15:31.615806028Z" level=info msg="CreateContainer within sandbox \"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:15:31.636738 containerd[1577]: time="2025-07-10T00:15:31.636662381Z" level=info msg="Container 227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:31.659921 containerd[1577]: time="2025-07-10T00:15:31.659855936Z" level=info msg="CreateContainer within sandbox \"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267\"" Jul 10 00:15:31.661039 containerd[1577]: time="2025-07-10T00:15:31.660915670Z" level=info msg="StartContainer for \"227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267\"" Jul 10 00:15:31.662992 containerd[1577]: time="2025-07-10T00:15:31.662916834Z" level=info msg="connecting to shim 227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267" 
address="unix:///run/containerd/s/b69f7766f52cec8e49bbf63bb5947b3dc26445b11214278124613d03a85bb8e5" protocol=ttrpc version=3 Jul 10 00:15:31.692283 systemd[1]: Started cri-containerd-227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267.scope - libcontainer container 227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267. Jul 10 00:15:31.749514 containerd[1577]: time="2025-07-10T00:15:31.749458870Z" level=info msg="StartContainer for \"227cfb040acde9cba31ddfebac2c468282554662a56c272669df665d2e904267\" returns successfully" Jul 10 00:15:32.159472 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:38938.service - OpenSSH per-connection server daemon (10.0.0.1:38938). Jul 10 00:15:32.229411 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 38938 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:32.231635 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:32.237634 systemd-logind[1514]: New session 13 of user core. Jul 10 00:15:32.248209 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:15:32.400339 sshd[5134]: Connection closed by 10.0.0.1 port 38938 Jul 10 00:15:32.400789 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:32.406386 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:38938.service: Deactivated successfully. Jul 10 00:15:32.409454 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:15:32.410636 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:15:32.412920 systemd-logind[1514]: Removed session 13. Jul 10 00:15:35.093514 containerd[1577]: time="2025-07-10T00:15:35.093422289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:35.094214 containerd[1577]: time="2025-07-10T00:15:35.094149369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 00:15:35.095804 containerd[1577]: time="2025-07-10T00:15:35.095766670Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:35.097934 containerd[1577]: time="2025-07-10T00:15:35.097899740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:35.098699 containerd[1577]: time="2025-07-10T00:15:35.098654132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.485742666s" Jul 10 00:15:35.098758 containerd[1577]: time="2025-07-10T00:15:35.098701091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:15:35.099705 containerd[1577]: time="2025-07-10T00:15:35.099629554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:15:35.102608 containerd[1577]: time="2025-07-10T00:15:35.102557893Z" 
level=info msg="CreateContainer within sandbox \"831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:15:35.112431 containerd[1577]: time="2025-07-10T00:15:35.112380220Z" level=info msg="Container 187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:35.123920 containerd[1577]: time="2025-07-10T00:15:35.123871056Z" level=info msg="CreateContainer within sandbox \"831ad543e2f81d3377119665d0bd499e7c7ebb76be12338c9a6b109869779dfd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c\"" Jul 10 00:15:35.124544 containerd[1577]: time="2025-07-10T00:15:35.124497957Z" level=info msg="StartContainer for \"187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c\"" Jul 10 00:15:35.126008 containerd[1577]: time="2025-07-10T00:15:35.125942569Z" level=info msg="connecting to shim 187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c" address="unix:///run/containerd/s/65a5f0d4f98e10c4b1291bdb02ce3165f0f8ab23264d0d5dcf49c72d4fa5833d" protocol=ttrpc version=3 Jul 10 00:15:35.159189 systemd[1]: Started cri-containerd-187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c.scope - libcontainer container 187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c. Jul 10 00:15:35.285653 containerd[1577]: time="2025-07-10T00:15:35.285595739Z" level=info msg="StartContainer for \"187f957ba390067720b43369573a24c235fc8525e21588897e8bbdfb11e5881c\" returns successfully" Jul 10 00:15:35.613927 kubelet[2734]: I0710 00:15:35.613548 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-644f966dd-5wff4" podStartSLOduration=44.439660342 podStartE2EDuration="1m3.613486807s" podCreationTimestamp="2025-07-10 00:14:32 +0000 UTC" firstStartedPulling="2025-07-10 00:15:15.925707277 +0000 UTC m=+67.035326830" lastFinishedPulling="2025-07-10 00:15:35.099533742 +0000 UTC m=+86.209153295" observedRunningTime="2025-07-10 00:15:35.61339888 +0000 UTC m=+86.723018433" watchObservedRunningTime="2025-07-10 00:15:35.613486807 +0000 UTC m=+86.723106360" Jul 10 00:15:35.777414 update_engine[1516]: I20250710 00:15:35.777333 1516 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 10 00:15:35.777414 update_engine[1516]: I20250710 00:15:35.777403 1516 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 10 00:15:35.778937 update_engine[1516]: I20250710 00:15:35.778867 1516 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 10 00:15:35.788793 update_engine[1516]: I20250710 00:15:35.788720 1516 omaha_request_params.cc:62] Current group set to beta Jul 10 00:15:35.788945 update_engine[1516]: I20250710 00:15:35.788925 1516 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 10 00:15:35.788945 update_engine[1516]: I20250710 00:15:35.788939 1516 update_attempter.cc:643] Scheduling an action processor start. 
Jul 10 00:15:35.789098 update_engine[1516]: I20250710 00:15:35.788998 1516 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 00:15:35.789098 update_engine[1516]: I20250710 00:15:35.789076 1516 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 10 00:15:35.789197 update_engine[1516]: I20250710 00:15:35.789166 1516 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 00:15:35.789197 update_engine[1516]: I20250710 00:15:35.789186 1516 omaha_request_action.cc:272] Request: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789197 update_engine[1516]: Jul 10 00:15:35.789496 update_engine[1516]: I20250710 00:15:35.789197 1516 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:15:35.795347 update_engine[1516]: I20250710 00:15:35.795287 1516 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:15:35.795748 update_engine[1516]: I20250710 00:15:35.795692 1516 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:15:35.800104 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 10 00:15:35.803098 update_engine[1516]: E20250710 00:15:35.802954 1516 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:15:35.803098 update_engine[1516]: I20250710 00:15:35.803073 1516 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 10 00:15:37.415931 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:38940.service - OpenSSH per-connection server daemon (10.0.0.1:38940). Jul 10 00:15:37.514874 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 38940 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:37.516646 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:37.522043 systemd-logind[1514]: New session 14 of user core. Jul 10 00:15:37.526132 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:15:38.049521 sshd[5197]: Connection closed by 10.0.0.1 port 38940 Jul 10 00:15:38.049996 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:38.056203 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:38940.service: Deactivated successfully. Jul 10 00:15:38.059215 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:15:38.060449 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:15:38.062935 systemd-logind[1514]: Removed session 14. 
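(Aside, not part of the journal above.) The update_engine errors come from the Omaha request being posted to the literal host name "disabled", presumably because updates are turned off on this host, so libcurl fails at name resolution with "Could not resolve host: disabled" and schedules a retry. The failure mode is an ordinary DNS lookup error, as this standard-library sketch shows; on a network that happens to resolve a host called "disabled" the lookup would of course succeed:

package main

import (
	"fmt"
	"net"
)

func main() {
	// update_engine tried to reach the host "disabled", which is not a
	// resolvable name on a typical network.
	addrs, err := net.LookupHost("disabled")
	if err != nil {
		// Comparable to libcurl's "Could not resolve host: disabled" above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}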
Jul 10 00:15:39.964938 containerd[1577]: time="2025-07-10T00:15:39.964866427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:39.967025 containerd[1577]: time="2025-07-10T00:15:39.966987699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:15:39.971595 containerd[1577]: time="2025-07-10T00:15:39.971555573Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:39.974958 containerd[1577]: time="2025-07-10T00:15:39.974886189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:39.975510 containerd[1577]: time="2025-07-10T00:15:39.975447854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.875784145s" Jul 10 00:15:39.975510 containerd[1577]: time="2025-07-10T00:15:39.975502278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:15:39.976564 containerd[1577]: time="2025-07-10T00:15:39.976543051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:15:39.988898 containerd[1577]: time="2025-07-10T00:15:39.988839472Z" level=info msg="CreateContainer within sandbox \"9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:15:40.002981 containerd[1577]: time="2025-07-10T00:15:40.002913953Z" level=info msg="Container 4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:40.018983 containerd[1577]: time="2025-07-10T00:15:40.018918427Z" level=info msg="CreateContainer within sandbox \"9cf3226afe5d6295fff5b9b571cc7529040cad156f3ef65a39a96c6bdb385840\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\"" Jul 10 00:15:40.020900 containerd[1577]: time="2025-07-10T00:15:40.019846978Z" level=info msg="StartContainer for \"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\"" Jul 10 00:15:40.021808 containerd[1577]: time="2025-07-10T00:15:40.021765265Z" level=info msg="connecting to shim 4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c" address="unix:///run/containerd/s/e7a4acc4b271403286f9b1e5b3e5e5e7f3329a10a214bda8f4e261dbe1e69027" protocol=ttrpc version=3 Jul 10 00:15:40.154292 systemd[1]: Started cri-containerd-4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c.scope - libcontainer container 4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c. 
Jul 10 00:15:40.210612 containerd[1577]: time="2025-07-10T00:15:40.210563714Z" level=info msg="StartContainer for \"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\" returns successfully" Jul 10 00:15:40.431086 containerd[1577]: time="2025-07-10T00:15:40.431007844Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:40.434941 containerd[1577]: time="2025-07-10T00:15:40.434884164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:15:40.436706 containerd[1577]: time="2025-07-10T00:15:40.436676422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 460.041195ms" Jul 10 00:15:40.437565 containerd[1577]: time="2025-07-10T00:15:40.436710006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:15:40.438932 containerd[1577]: time="2025-07-10T00:15:40.438903715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:15:40.439597 containerd[1577]: time="2025-07-10T00:15:40.439541103Z" level=info msg="CreateContainer within sandbox \"57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:15:40.461239 containerd[1577]: time="2025-07-10T00:15:40.461179070Z" level=info msg="Container 11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:40.469992 containerd[1577]: time="2025-07-10T00:15:40.469826746Z" level=info msg="CreateContainer within sandbox \"57996c1565e257f5fc56f6c24e18411467de13e602b25997069e5967d45ee4dd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9\"" Jul 10 00:15:40.470843 containerd[1577]: time="2025-07-10T00:15:40.470470236Z" level=info msg="StartContainer for \"11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9\"" Jul 10 00:15:40.471715 containerd[1577]: time="2025-07-10T00:15:40.471665452Z" level=info msg="connecting to shim 11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9" address="unix:///run/containerd/s/cc407d178732a4a3f8c37ab3a95300518fc5b5b8419c0558a747e16cd5b113da" protocol=ttrpc version=3 Jul 10 00:15:40.505342 systemd[1]: Started cri-containerd-11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9.scope - libcontainer container 11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9. 
Jul 10 00:15:40.762148 containerd[1577]: time="2025-07-10T00:15:40.761993755Z" level=info msg="StartContainer for \"11f6d68b333c1c912e17b4b5d28ba8c4c8ac1cd42ecc3623f97f8e6922956bc9\" returns successfully" Jul 10 00:15:40.828343 containerd[1577]: time="2025-07-10T00:15:40.827420296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\" id:\"5f8eae80adf5e68a056699aa45e7013c0becbe98621e1786c9a87a68759e13ae\" pid:5311 exited_at:{seconds:1752106540 nanos:825869265}" Jul 10 00:15:40.828816 kubelet[2734]: I0710 00:15:40.828486 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-644f966dd-9rvwf" podStartSLOduration=45.670650011 podStartE2EDuration="1m8.828464977s" podCreationTimestamp="2025-07-10 00:14:32 +0000 UTC" firstStartedPulling="2025-07-10 00:15:17.279733449 +0000 UTC m=+68.389352992" lastFinishedPulling="2025-07-10 00:15:40.437548405 +0000 UTC m=+91.547167958" observedRunningTime="2025-07-10 00:15:40.828180688 +0000 UTC m=+91.937800241" watchObservedRunningTime="2025-07-10 00:15:40.828464977 +0000 UTC m=+91.938084530" Jul 10 00:15:40.828816 kubelet[2734]: I0710 00:15:40.828645 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78f888ffbb-tv82g" podStartSLOduration=41.035843915 podStartE2EDuration="1m4.828640069s" podCreationTimestamp="2025-07-10 00:14:36 +0000 UTC" firstStartedPulling="2025-07-10 00:15:16.183573007 +0000 UTC m=+67.293192570" lastFinishedPulling="2025-07-10 00:15:39.976369171 +0000 UTC m=+91.085988724" observedRunningTime="2025-07-10 00:15:40.812757966 +0000 UTC m=+91.922377529" watchObservedRunningTime="2025-07-10 00:15:40.828640069 +0000 UTC m=+91.938259622" Jul 10 00:15:43.066200 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656). Jul 10 00:15:43.140987 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:43.143223 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:43.157002 systemd-logind[1514]: New session 15 of user core. Jul 10 00:15:43.163268 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:15:43.371850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282536055.mount: Deactivated successfully. Jul 10 00:15:43.407768 sshd[5332]: Connection closed by 10.0.0.1 port 39656 Jul 10 00:15:43.408115 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:43.419282 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:39656.service: Deactivated successfully. Jul 10 00:15:43.421546 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:15:43.422401 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:15:43.425591 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:39662.service - OpenSSH per-connection server daemon (10.0.0.1:39662). Jul 10 00:15:43.426856 systemd-logind[1514]: Removed session 15. Jul 10 00:15:43.483364 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:43.485093 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:43.491264 systemd-logind[1514]: New session 16 of user core. 
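(Aside, not part of the journal above.) For the calico-apiserver-644f966dd-9rvwf entry, podStartSLOduration works out to the E2E startup duration minus the image-pull window, and the monotonic (m=+…) offsets printed in that entry reproduce the reported value exactly. A small Go sketch of the arithmetic, all numbers copied from that log line:

package main

import "fmt"

func main() {
	// Values copied from the calico-apiserver-644f966dd-9rvwf
	// pod_startup_latency_tracker entry above (m=+ offsets, in seconds).
	const (
		e2e                 = 68.828464977 // podStartE2EDuration = 1m8.828464977s
		firstStartedPulling = 68.389352992
		lastFinishedPulling = 91.547167958
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image-pull window: %.9f s\n", pullWindow)
	// Prints 45.670650011, matching the reported podStartSLOduration.
	fmt.Printf("SLO startup duration: %.9f s\n", e2e-pullWindow)
}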
Jul 10 00:15:43.499183 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:15:43.686807 sshd[5349]: Connection closed by 10.0.0.1 port 39662 Jul 10 00:15:43.688176 sshd-session[5347]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:43.701436 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:39662.service: Deactivated successfully. Jul 10 00:15:43.703637 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:15:43.705560 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:15:43.708018 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39676.service - OpenSSH per-connection server daemon (10.0.0.1:39676). Jul 10 00:15:43.709842 systemd-logind[1514]: Removed session 16. Jul 10 00:15:43.765920 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 39676 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:43.767908 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:43.773020 systemd-logind[1514]: New session 17 of user core. Jul 10 00:15:43.782134 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:15:43.793739 containerd[1577]: time="2025-07-10T00:15:43.793694338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:43.857328 containerd[1577]: time="2025-07-10T00:15:43.857264216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:15:43.899119 containerd[1577]: time="2025-07-10T00:15:43.899046632Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:43.912838 containerd[1577]: time="2025-07-10T00:15:43.911877704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:43.915213 containerd[1577]: time="2025-07-10T00:15:43.915159222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.476223827s" Jul 10 00:15:43.915213 containerd[1577]: time="2025-07-10T00:15:43.915209537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:15:43.930626 sshd[5367]: Connection closed by 10.0.0.1 port 39676 Jul 10 00:15:43.932123 sshd-session[5365]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:43.938905 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:15:43.942712 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39676.service: Deactivated successfully. Jul 10 00:15:43.946326 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 10 00:15:43.947469 containerd[1577]: time="2025-07-10T00:15:43.947295822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:15:43.950580 containerd[1577]: time="2025-07-10T00:15:43.949430357Z" level=info msg="CreateContainer within sandbox \"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:15:43.953199 systemd-logind[1514]: Removed session 17. Jul 10 00:15:43.971019 containerd[1577]: time="2025-07-10T00:15:43.969113661Z" level=info msg="Container d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:43.980240 containerd[1577]: time="2025-07-10T00:15:43.980186181Z" level=info msg="CreateContainer within sandbox \"9f6f3e038efb5e262e27755386044dc7cceb8e6a15e5100cd2265399007b3d13\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704\"" Jul 10 00:15:43.981012 containerd[1577]: time="2025-07-10T00:15:43.980832926Z" level=info msg="StartContainer for \"d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704\"" Jul 10 00:15:43.982270 containerd[1577]: time="2025-07-10T00:15:43.982239451Z" level=info msg="connecting to shim d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704" address="unix:///run/containerd/s/cb37ed2697b62183ca0eec3053d40df9c778caf1c049ceeae38bada4086c42d5" protocol=ttrpc version=3 Jul 10 00:15:44.041288 systemd[1]: Started cri-containerd-d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704.scope - libcontainer container d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704. Jul 10 00:15:44.124769 containerd[1577]: time="2025-07-10T00:15:44.124723210Z" level=info msg="StartContainer for \"d060189ad9f4c786eef9fe87211e937f95f108f976aa3e628598ad69d3227704\" returns successfully" Jul 10 00:15:45.753603 update_engine[1516]: I20250710 00:15:45.752978 1516 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:15:45.754908 update_engine[1516]: I20250710 00:15:45.754885 1516 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:15:45.757777 update_engine[1516]: I20250710 00:15:45.757695 1516 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 10 00:15:45.764521 update_engine[1516]: E20250710 00:15:45.764356 1516 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:15:45.764521 update_engine[1516]: I20250710 00:15:45.764426 1516 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 10 00:15:47.139842 containerd[1577]: time="2025-07-10T00:15:47.139760605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:47.140751 containerd[1577]: time="2025-07-10T00:15:47.140723086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 00:15:47.142425 containerd[1577]: time="2025-07-10T00:15:47.142383480Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:47.145513 containerd[1577]: time="2025-07-10T00:15:47.145458752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:15:47.146092 containerd[1577]: time="2025-07-10T00:15:47.146028270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.198687172s" Jul 10 00:15:47.146092 containerd[1577]: time="2025-07-10T00:15:47.146069849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:15:47.149243 containerd[1577]: time="2025-07-10T00:15:47.149171110Z" level=info msg="CreateContainer within sandbox \"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:15:47.160910 containerd[1577]: time="2025-07-10T00:15:47.160849029Z" level=info msg="Container 7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:15:47.174355 containerd[1577]: time="2025-07-10T00:15:47.174303802Z" level=info msg="CreateContainer within sandbox \"1ad2c4ba48793d0b48bc85e449d0c94d2016ccfa73a7a9c64c45eb88b2e6982c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c\"" Jul 10 00:15:47.175262 containerd[1577]: time="2025-07-10T00:15:47.175205709Z" level=info msg="StartContainer for \"7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c\"" Jul 10 00:15:47.176725 containerd[1577]: time="2025-07-10T00:15:47.176697433Z" level=info msg="connecting to shim 7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c" address="unix:///run/containerd/s/b69f7766f52cec8e49bbf63bb5947b3dc26445b11214278124613d03a85bb8e5" protocol=ttrpc version=3 Jul 10 00:15:47.211244 systemd[1]: Started cri-containerd-7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c.scope - libcontainer container 
7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c. Jul 10 00:15:47.266153 containerd[1577]: time="2025-07-10T00:15:47.266098156Z" level=info msg="StartContainer for \"7b3787ce2ef02df33cebd1579497e14921f4c2bf6ec8b327e934a26392eb779c\" returns successfully" Jul 10 00:15:47.801287 kubelet[2734]: I0710 00:15:47.801218 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9hrm5" podStartSLOduration=39.818223548 podStartE2EDuration="1m11.801201389s" podCreationTimestamp="2025-07-10 00:14:36 +0000 UTC" firstStartedPulling="2025-07-10 00:15:15.164137598 +0000 UTC m=+66.273757141" lastFinishedPulling="2025-07-10 00:15:47.147115429 +0000 UTC m=+98.256734982" observedRunningTime="2025-07-10 00:15:47.800849664 +0000 UTC m=+98.910469217" watchObservedRunningTime="2025-07-10 00:15:47.801201389 +0000 UTC m=+98.910820942" Jul 10 00:15:47.801933 kubelet[2734]: I0710 00:15:47.801447 2734 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66666b6b-qrtjl" podStartSLOduration=17.800000851 podStartE2EDuration="46.801441363s" podCreationTimestamp="2025-07-10 00:15:01 +0000 UTC" firstStartedPulling="2025-07-10 00:15:14.944423756 +0000 UTC m=+66.054043310" lastFinishedPulling="2025-07-10 00:15:43.945864269 +0000 UTC m=+95.055483822" observedRunningTime="2025-07-10 00:15:45.273481285 +0000 UTC m=+96.383100848" watchObservedRunningTime="2025-07-10 00:15:47.801441363 +0000 UTC m=+98.911060916" Jul 10 00:15:48.128911 kubelet[2734]: I0710 00:15:48.128795 2734 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:15:48.128911 kubelet[2734]: I0710 00:15:48.128836 2734 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:15:48.948523 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:56108.service - OpenSSH per-connection server daemon (10.0.0.1:56108). Jul 10 00:15:49.031588 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:49.033471 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:49.041248 systemd-logind[1514]: New session 18 of user core. Jul 10 00:15:49.050312 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:15:49.344503 sshd[5474]: Connection closed by 10.0.0.1 port 56108 Jul 10 00:15:49.346225 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:49.352866 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:56108.service: Deactivated successfully. Jul 10 00:15:49.355678 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:15:49.356748 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:15:49.358478 systemd-logind[1514]: Removed session 18. 
Jul 10 00:15:49.775385 containerd[1577]: time="2025-07-10T00:15:49.775312901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\" id:\"de2ec01cb54cd47a8ee56e1a27ec32a630379b50cd41511e167f00de82202e33\" pid:5500 exited_at:{seconds:1752106549 nanos:774650839}" Jul 10 00:15:50.225657 containerd[1577]: time="2025-07-10T00:15:50.225247883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"88916f21f6ee95d8585abc72c81cdc9ed1ad0dfa91edbfd6405f2624226be8f5\" pid:5518 exited_at:{seconds:1752106550 nanos:224851423}" Jul 10 00:15:52.742773 containerd[1577]: time="2025-07-10T00:15:52.742710864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" id:\"169c1295dd928238f442bc8a647e15ef8cc532e98b42b5b88efc026f6fc99e7d\" pid:5551 exited_at:{seconds:1752106552 nanos:742327730}" Jul 10 00:15:54.363731 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Jul 10 00:15:54.430004 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:54.431997 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:54.437365 systemd-logind[1514]: New session 19 of user core. Jul 10 00:15:54.447353 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:15:54.593647 sshd[5567]: Connection closed by 10.0.0.1 port 56118 Jul 10 00:15:54.594013 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:54.598342 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:56118.service: Deactivated successfully. Jul 10 00:15:54.600462 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:15:54.601302 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:15:54.603035 systemd-logind[1514]: Removed session 19. Jul 10 00:15:55.751491 update_engine[1516]: I20250710 00:15:55.751395 1516 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:15:55.751912 update_engine[1516]: I20250710 00:15:55.751696 1516 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:15:55.751943 update_engine[1516]: I20250710 00:15:55.751927 1516 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:15:55.762273 update_engine[1516]: E20250710 00:15:55.762219 1516 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:15:55.762273 update_engine[1516]: I20250710 00:15:55.762271 1516 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 10 00:15:59.610733 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:47678.service - OpenSSH per-connection server daemon (10.0.0.1:47678). Jul 10 00:15:59.661378 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 47678 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:15:59.663071 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:15:59.667504 systemd-logind[1514]: New session 20 of user core. Jul 10 00:15:59.677168 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 10 00:15:59.801600 sshd[5582]: Connection closed by 10.0.0.1 port 47678 Jul 10 00:15:59.802057 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Jul 10 00:15:59.806761 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:47678.service: Deactivated successfully. Jul 10 00:15:59.808799 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:15:59.809686 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:15:59.810889 systemd-logind[1514]: Removed session 20. Jul 10 00:16:04.819450 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:47682.service - OpenSSH per-connection server daemon (10.0.0.1:47682). Jul 10 00:16:04.879955 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 47682 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:04.881635 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:04.886102 systemd-logind[1514]: New session 21 of user core. Jul 10 00:16:04.893113 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:16:05.031193 sshd[5597]: Connection closed by 10.0.0.1 port 47682 Jul 10 00:16:05.031483 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:05.035584 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:47682.service: Deactivated successfully. Jul 10 00:16:05.037743 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:16:05.038607 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:16:05.040188 systemd-logind[1514]: Removed session 21. Jul 10 00:16:05.751855 update_engine[1516]: I20250710 00:16:05.751744 1516 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:16:05.752427 update_engine[1516]: I20250710 00:16:05.752043 1516 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:16:05.752427 update_engine[1516]: I20250710 00:16:05.752312 1516 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:16:05.760198 update_engine[1516]: E20250710 00:16:05.760094 1516 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:16:05.760198 update_engine[1516]: I20250710 00:16:05.760142 1516 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:16:05.760198 update_engine[1516]: I20250710 00:16:05.760169 1516 omaha_request_action.cc:617] Omaha request response: Jul 10 00:16:05.760526 update_engine[1516]: E20250710 00:16:05.760288 1516 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 10 00:16:05.761208 update_engine[1516]: I20250710 00:16:05.761135 1516 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 10 00:16:05.761208 update_engine[1516]: I20250710 00:16:05.761171 1516 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:16:05.761208 update_engine[1516]: I20250710 00:16:05.761178 1516 update_attempter.cc:306] Processing Done. Jul 10 00:16:05.761424 update_engine[1516]: E20250710 00:16:05.761390 1516 update_attempter.cc:619] Update failed. 
Jul 10 00:16:05.761424 update_engine[1516]: I20250710 00:16:05.761418 1516 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 10 00:16:05.761469 update_engine[1516]: I20250710 00:16:05.761425 1516 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 10 00:16:05.761469 update_engine[1516]: I20250710 00:16:05.761432 1516 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 10 00:16:05.761558 update_engine[1516]: I20250710 00:16:05.761521 1516 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 00:16:05.761612 update_engine[1516]: I20250710 00:16:05.761561 1516 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 10 00:16:05.761612 update_engine[1516]: I20250710 00:16:05.761569 1516 omaha_request_action.cc:272] Request: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: Jul 10 00:16:05.761612 update_engine[1516]: I20250710 00:16:05.761575 1516 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 00:16:05.761794 update_engine[1516]: I20250710 00:16:05.761739 1516 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 00:16:05.762061 update_engine[1516]: I20250710 00:16:05.762028 1516 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 10 00:16:05.766549 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 10 00:16:05.770741 update_engine[1516]: E20250710 00:16:05.770695 1516 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770746 1516 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770754 1516 omaha_request_action.cc:617] Omaha request response: Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770762 1516 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770768 1516 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770776 1516 update_attempter.cc:306] Processing Done. Jul 10 00:16:05.770790 update_engine[1516]: I20250710 00:16:05.770781 1516 update_attempter.cc:310] Error event sent. Jul 10 00:16:05.770996 update_engine[1516]: I20250710 00:16:05.770793 1516 update_check_scheduler.cc:74] Next update check in 45m55s Jul 10 00:16:05.771383 locksmithd[1573]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 10 00:16:10.050934 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:54062.service - OpenSSH per-connection server daemon (10.0.0.1:54062). Jul 10 00:16:10.109567 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:10.111073 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:10.115395 systemd-logind[1514]: New session 22 of user core. 
Jul 10 00:16:10.124133 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:16:10.239336 sshd[5615]: Connection closed by 10.0.0.1 port 54062 Jul 10 00:16:10.239670 sshd-session[5613]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:10.252950 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:54062.service: Deactivated successfully. Jul 10 00:16:10.254883 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:16:10.255597 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:16:10.258514 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:54070.service - OpenSSH per-connection server daemon (10.0.0.1:54070). Jul 10 00:16:10.259246 systemd-logind[1514]: Removed session 22. Jul 10 00:16:10.310208 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 54070 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:10.311775 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:10.318204 systemd-logind[1514]: New session 23 of user core. Jul 10 00:16:10.334136 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:16:11.346395 sshd[5631]: Connection closed by 10.0.0.1 port 54070 Jul 10 00:16:11.347593 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:11.357904 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:54070.service: Deactivated successfully. Jul 10 00:16:11.360100 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:16:11.361025 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:16:11.364482 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082). Jul 10 00:16:11.365925 systemd-logind[1514]: Removed session 23. Jul 10 00:16:11.421909 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:11.423703 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:11.428716 systemd-logind[1514]: New session 24 of user core. Jul 10 00:16:11.438129 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:16:13.342224 sshd[5644]: Connection closed by 10.0.0.1 port 54082 Jul 10 00:16:13.342652 sshd-session[5642]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:13.354871 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:54082.service: Deactivated successfully. Jul 10 00:16:13.358151 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:16:13.358416 systemd[1]: session-24.scope: Consumed 653ms CPU time, 77M memory peak. Jul 10 00:16:13.359071 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:16:13.367460 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:54090.service - OpenSSH per-connection server daemon (10.0.0.1:54090). Jul 10 00:16:13.371270 systemd-logind[1514]: Removed session 24. Jul 10 00:16:13.420690 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 54090 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:13.422280 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:13.426799 systemd-logind[1514]: New session 25 of user core. Jul 10 00:16:13.436130 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 10 00:16:14.151163 sshd[5665]: Connection closed by 10.0.0.1 port 54090 Jul 10 00:16:14.151614 sshd-session[5663]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:14.162640 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:54090.service: Deactivated successfully. Jul 10 00:16:14.164551 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:16:14.165408 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:16:14.168499 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:54100.service - OpenSSH per-connection server daemon (10.0.0.1:54100). Jul 10 00:16:14.169369 systemd-logind[1514]: Removed session 25. Jul 10 00:16:14.219384 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 54100 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:14.220688 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:14.225037 systemd-logind[1514]: New session 26 of user core. Jul 10 00:16:14.237104 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:16:14.662295 sshd[5679]: Connection closed by 10.0.0.1 port 54100 Jul 10 00:16:14.662607 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:14.666854 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:54100.service: Deactivated successfully. Jul 10 00:16:14.668903 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:16:14.669772 systemd-logind[1514]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:16:14.671321 systemd-logind[1514]: Removed session 26. Jul 10 00:16:19.675826 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:60744.service - OpenSSH per-connection server daemon (10.0.0.1:60744). Jul 10 00:16:19.731117 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 60744 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:19.732804 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:19.738642 systemd-logind[1514]: New session 27 of user core. Jul 10 00:16:19.739257 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 10 00:16:19.777916 containerd[1577]: time="2025-07-10T00:16:19.777863255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d5372cec1709e091b845e5ce9a503ca943a7fd81c0f613208c6fd9dc382ab1c\" id:\"c413fbfa22af4c2c632bade1a2a54700fe1c25a3dd2de8d759fc55c566fee15e\" pid:5709 exited_at:{seconds:1752106579 nanos:777490332}" Jul 10 00:16:19.909596 sshd[5715]: Connection closed by 10.0.0.1 port 60744 Jul 10 00:16:19.910376 containerd[1577]: time="2025-07-10T00:16:19.910338943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"8705df250defd432cac6396c61d0879f55044ab78cc6479e3011b2920965d207\" pid:5728 exited_at:{seconds:1752106579 nanos:909851422}" Jul 10 00:16:19.910600 sshd-session[5694]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:19.916686 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:60744.service: Deactivated successfully. Jul 10 00:16:19.918944 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:16:19.919753 systemd-logind[1514]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:16:19.922159 systemd-logind[1514]: Removed session 27. 
Jul 10 00:16:22.667575 containerd[1577]: time="2025-07-10T00:16:22.667521789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a71e2915761605f91634366398d6b4fe36d4f01c29f4b40031df31cc2ec5cdb6\" id:\"b9328b36cf6a16ca92c7ec67eb2edfeb3b69ccf259e016493f692aeb51963505\" pid:5770 exited_at:{seconds:1752106582 nanos:667130993}" Jul 10 00:16:24.925573 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:60758.service - OpenSSH per-connection server daemon (10.0.0.1:60758). Jul 10 00:16:24.990777 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 60758 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:24.992766 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:24.999643 systemd-logind[1514]: New session 28 of user core. Jul 10 00:16:25.004849 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 10 00:16:25.204597 sshd[5787]: Connection closed by 10.0.0.1 port 60758 Jul 10 00:16:25.209760 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:60758.service: Deactivated successfully. Jul 10 00:16:25.204851 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:25.212069 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 00:16:25.213112 systemd-logind[1514]: Session 28 logged out. Waiting for processes to exit. Jul 10 00:16:25.214907 systemd-logind[1514]: Removed session 28. Jul 10 00:16:25.893217 containerd[1577]: time="2025-07-10T00:16:25.893155157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa05ca4624e9d0f6fbaa76e5a51c36ce6aa10671abe68b3f391cea3714f03643\" id:\"70014662ddb4bfd22ea65dfa5cd18a43ceea301ad6b643c923c1256bdbf58cc1\" pid:5812 exited_at:{seconds:1752106585 nanos:892388832}" Jul 10 00:16:30.218875 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:44906.service - OpenSSH per-connection server daemon (10.0.0.1:44906). Jul 10 00:16:30.280437 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 44906 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:30.282582 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:30.287296 systemd-logind[1514]: New session 29 of user core. Jul 10 00:16:30.299222 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 10 00:16:30.462482 sshd[5832]: Connection closed by 10.0.0.1 port 44906 Jul 10 00:16:30.462871 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:30.467337 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:44906.service: Deactivated successfully. Jul 10 00:16:30.470071 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 00:16:30.471279 systemd-logind[1514]: Session 29 logged out. Waiting for processes to exit. Jul 10 00:16:30.472647 systemd-logind[1514]: Removed session 29. Jul 10 00:16:35.482196 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916). Jul 10 00:16:35.535836 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:CN83gutZb/k5+6WAkn10Pe0824AMOrEDH4+5h0rggeY Jul 10 00:16:35.537718 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:16:35.543137 systemd-logind[1514]: New session 30 of user core. Jul 10 00:16:35.557113 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jul 10 00:16:35.684426 sshd[5850]: Connection closed by 10.0.0.1 port 44916 Jul 10 00:16:35.686380 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Jul 10 00:16:35.691958 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:44916.service: Deactivated successfully. Jul 10 00:16:35.694597 systemd[1]: session-30.scope: Deactivated successfully. Jul 10 00:16:35.695661 systemd-logind[1514]: Session 30 logged out. Waiting for processes to exit. Jul 10 00:16:35.697150 systemd-logind[1514]: Removed session 30.